Online Payment Fraud Detection¶

By Pradyunya Chunchwar, Umesh Shelare, Yash Jambhulkar, Aman Verma¶

In [1]:
# import the required libraries
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
In [2]:
payments = pd.read_csv("onlinefraud.csv")
In [3]:
payments.head()
Out[3]:
step type amount nameOrig oldbalanceOrg newbalanceOrig nameDest oldbalanceDest newbalanceDest isFraud isFlaggedFraud
0 1 PAYMENT 9839.64 C1231006815 170136.0 160296.36 M1979787155 0.0 0.0 0 0
1 1 PAYMENT 1864.28 C1666544295 21249.0 19384.72 M2044282225 0.0 0.0 0 0
2 1 TRANSFER 181.00 C1305486145 181.0 0.00 C553264065 0.0 0.0 1 0
3 1 CASH_OUT 181.00 C840083671 181.0 0.00 C38997010 21182.0 0.0 1 0
4 1 PAYMENT 11668.14 C2048537720 41554.0 29885.86 M1230701703 0.0 0.0 0 0

Exploratory Data Analysis¶

In [4]:
payments.shape
Out[4]:
(6362620, 11)
In [5]:
payments['isFraud'].value_counts()
Out[5]:
0    6354407
1       8213
Name: isFraud, dtype: int64
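Fraud accounts for only 8,213 of 6,362,620 rows, roughly 0.13% of all transactions. A minimal sketch of computing such a rate, using a small hypothetical Series in place of `payments['isFraud']`:

```python
import pandas as pd

# Toy stand-in for payments['isFraud'] (hypothetical values, not the real dataset)
is_fraud = pd.Series([0] * 997 + [1] * 3)

counts = is_fraud.value_counts()
fraud_rate = counts.get(1, 0) / len(is_fraud)
print(f"fraud rate: {fraud_rate:.2%}")  # 0.30% on this toy series
```

On the real column the same expression gives 8213 / 6362620 ≈ 0.13%, which is why the class balance has to be addressed before training.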
In [6]:
fraud_count = payments.groupby('type')['isFraud'].sum().reset_index()
fraud_count
Out[6]:
type isFraud
0 CASH_IN 0
1 CASH_OUT 4116
2 DEBIT 0
3 PAYMENT 0
4 TRANSFER 4097

The dataset is imbalanced¶

In [7]:
legit = payments[payments.isFraud == 0]
fraud = payments[payments.isFraud == 1]
In [8]:
print(legit.shape)
print(fraud.shape)
(6354407, 11)
(8213, 11)
In [9]:
legit_sample = legit.sample(n=8213)
fraud_sample = fraud.sample(n=8213)

Balancing the dataset¶

In [10]:
new_data = pd.concat([legit_sample, fraud_sample], axis=0)
new_data = new_data.sample(frac=1, random_state=123).reset_index(drop=True)
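The two `sample()` calls above draw without a `random_state`, so the balanced frame differs on every run (and since the fraud class has exactly 8,213 rows, sampling n=8213 from it amounts to a shuffle). A reproducible sketch of the same undersampling step on a small hypothetical frame:

```python
import numpy as np
import pandas as pd

# Toy imbalanced frame (hypothetical; stands in for the real payments data)
rng = np.random.default_rng(0)
toy = pd.DataFrame({
    "amount": rng.uniform(1, 1000, size=100),
    "isFraud": [0] * 90 + [1] * 10,
})

legit = toy[toy.isFraud == 0]
fraud = toy[toy.isFraud == 1]

# Undersample the majority class down to the minority size; a fixed
# random_state makes the draw (and hence the final frame) reproducible
legit_sample = legit.sample(n=len(fraud), random_state=42)
balanced = (pd.concat([legit_sample, fraud])
              .sample(frac=1, random_state=42)
              .reset_index(drop=True))

print(balanced["isFraud"].value_counts())
```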
In [11]:
new_data.groupby('isFraud').mean()
Out[11]:
step amount oldbalanceOrg newbalanceOrig oldbalanceDest newbalanceDest isFlaggedFraud
isFraud
0 244.598685 1.877650e+05 8.759508e+05 899497.961936 1.149611e+06 1.291266e+06 0.000000
1 368.413856 1.467967e+06 1.649668e+06 192392.631836 5.442496e+05 1.279708e+06 0.001948

Plotting a pie chart of transaction types¶

In [12]:
Type = new_data['type'].value_counts()
data = Type.values
labels = Type.index
colors = sns.color_palette('pastel')[0:8]
plt.pie(data, labels=labels, colors=colors, autopct='%.0f%%')
plt.title('Distribution of Transaction type')
plt.legend(labels, loc='best', bbox_to_anchor=(1, 0.5))
plt.show()
In [13]:
fraud_count = new_data.groupby('type')['isFraud'].sum().reset_index()
fraud_count
Out[13]:
type isFraud
0 CASH_IN 0
1 CASH_OUT 4116
2 DEBIT 0
3 PAYMENT 0
4 TRANSFER 4097
In [14]:
new_data['type'].value_counts()
Out[14]:
CASH_OUT    6963
TRANSFER    4788
PAYMENT     2780
CASH_IN     1843
DEBIT         52
Name: type, dtype: int64
In [15]:
new_data
Out[15]:
step type amount nameOrig oldbalanceOrg newbalanceOrig nameDest oldbalanceDest newbalanceDest isFraud isFlaggedFraud
0 256 CASH_OUT 129917.80 C1116415695 0.00 0.00 C1063213841 417307.02 547224.82 0 0
1 605 CASH_OUT 269692.53 C165949141 269692.53 0.00 C895796754 2248741.77 2518434.30 1 0
2 604 CASH_OUT 203848.61 C972076010 203848.61 0.00 C785941679 188810.39 392659.00 1 0
3 437 CASH_OUT 16207.04 C458723859 16207.04 0.00 C326321916 136857.72 153064.76 1 0
4 153 TRANSFER 1711534.01 C2035178632 0.00 0.00 C776917345 3746831.76 5458365.76 0 0
... ... ... ... ... ... ... ... ... ... ... ...
16421 542 CASH_IN 32269.89 C1592888358 45831.00 78100.89 C272800547 624031.14 591761.26 0 0
16422 212 CASH_OUT 1220.80 C248395903 1220.80 0.00 C154068331 338688.89 339909.69 1 0
16423 352 PAYMENT 33269.66 C1166055795 0.00 0.00 M155199303 0.00 0.00 0 0
16424 223 TRANSFER 468446.77 C1278233953 468446.77 0.00 C1583808597 0.00 0.00 1 0
16425 410 CASH_OUT 4900487.15 C1615320222 4900487.15 0.00 C1322611502 0.00 4900487.15 1 0

16426 rows × 11 columns

In [16]:
new_data.isnull()
Out[16]:
step type amount nameOrig oldbalanceOrg newbalanceOrig nameDest oldbalanceDest newbalanceDest isFraud isFlaggedFraud
0 False False False False False False False False False False False
1 False False False False False False False False False False False
2 False False False False False False False False False False False
3 False False False False False False False False False False False
4 False False False False False False False False False False False
... ... ... ... ... ... ... ... ... ... ... ...
16421 False False False False False False False False False False False
16422 False False False False False False False False False False False
16423 False False False False False False False False False False False
16424 False False False False False False False False False False False
16425 False False False False False False False False False False False

16426 rows × 11 columns

In [17]:
sns.heatmap(new_data.isnull(),yticklabels=False,cbar=False,cmap="viridis")
Out[17]:
<AxesSubplot:>

Checking for null values¶

In [18]:
new_data.isnull().sum()
Out[18]:
step              0
type              0
amount            0
nameOrig          0
oldbalanceOrg     0
newbalanceOrig    0
nameDest          0
oldbalanceDest    0
newbalanceDest    0
isFraud           0
isFlaggedFraud    0
dtype: int64
In [19]:
new_data.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 16426 entries, 0 to 16425
Data columns (total 11 columns):
 #   Column          Non-Null Count  Dtype  
---  ------          --------------  -----  
 0   step            16426 non-null  int64  
 1   type            16426 non-null  object 
 2   amount          16426 non-null  float64
 3   nameOrig        16426 non-null  object 
 4   oldbalanceOrg   16426 non-null  float64
 5   newbalanceOrig  16426 non-null  float64
 6   nameDest        16426 non-null  object 
 7   oldbalanceDest  16426 non-null  float64
 8   newbalanceDest  16426 non-null  float64
 9   isFraud         16426 non-null  int64  
 10  isFlaggedFraud  16426 non-null  int64  
dtypes: float64(5), int64(3), object(3)
memory usage: 1.4+ MB

Data Visualization¶

In [20]:
#pip install ydata-profiling
In [21]:
from pandas_profiling import ProfileReport
C:\Users\msi\AppData\Local\Temp/ipykernel_8808/2274191625.py:1: DeprecationWarning: `import pandas_profiling` is going to be deprecated by April 1st. Please use `import ydata_profiling` instead.
  from pandas_profiling import ProfileReport
In [23]:
report = ProfileReport(new_data, title="Fraud Detection", explorative=True)
In [24]:
report.to_widgets()
In [25]:
report.to_notebook_iframe()
In [26]:
#pip install autoviz
In [27]:
from autoviz.AutoViz_Class import AutoViz_Class
Imported v0.1.58. After importing, execute '%matplotlib inline' to display charts in Jupyter.
    AV = AutoViz_Class()
    dfte = AV.AutoViz(filename, sep=',', depVar='', dfte=None, header=0, verbose=1, lowess=False,
               chart_format='svg',max_rows_analyzed=150000,max_cols_analyzed=30, save_plot_dir=None)
Update: verbose=0 displays charts in your local Jupyter notebook.
        verbose=1 additionally provides EDA data cleaning suggestions. It also displays charts.
        verbose=2 does not display charts but saves them in AutoViz_Plots folder in local machine.
        chart_format='bokeh' displays charts in your local Jupyter notebook.
        chart_format='server' displays charts in your browser: one tab for each chart type
        chart_format='html' silently saves interactive HTML files in your local machine
In [28]:
AV = AutoViz_Class()
new_data.to_csv('new_data.csv', index=False)
file="new_data.csv"
In [29]:
AV.AutoViz(file,sep=",",depVar="",dfte=None,header=0,verbose=0,lowess=False,chart_format="svg",max_rows_analyzed=150000,max_cols_analyzed=30,)
Shape of your Data Set loaded: (16426, 11)
#######################################################################################
######################## C L A S S I F Y I N G  V A R I A B L E S  ####################
#######################################################################################
Classifying variables in data set...
    Number of Numeric Columns =  5
    Number of Integer-Categorical Columns =  1
    Number of String-Categorical Columns =  2
    Number of Factor-Categorical Columns =  0
    Number of String-Boolean Columns =  0
    Number of Numeric-Boolean Columns =  2
    Number of Discrete String Columns =  0
    Number of NLP String Columns =  0
    Number of Date Time Columns =  0
    Number of ID Columns =  1
    Number of Columns to Delete =  0
    11 Predictors classified...
        1 variables removed since they were ID or low-information variables
Number of All Scatter Plots = 15
All Plots done
Time to run AutoViz = 6 seconds 

 ###################### AUTO VISUALIZATION Completed ########################
Out[29]:
step type amount nameOrig oldbalanceOrg newbalanceOrig nameDest oldbalanceDest newbalanceDest isFraud isFlaggedFraud
0 23 PAYMENT 11426.55 C689381585 30768.00 19341.45 M1546541454 0.00 0.00 0 0
1 409 CASH_OUT 1050135.78 C866706747 1050135.78 0.00 C1726408560 4365234.19 5415369.97 1 0
2 494 TRANSFER 503620.99 C1267328802 503620.99 0.00 C593181726 0.00 0.00 1 0
3 32 CASH_OUT 404165.06 C254731202 404165.06 0.00 C1749677978 0.00 404165.06 1 0
4 157 PAYMENT 21658.43 C1200898654 1464.73 0.00 M1302249986 0.00 0.00 0 0
... ... ... ... ... ... ... ... ... ... ... ...
16421 233 CASH_OUT 196927.61 C1175510884 41029.00 0.00 C55440432 1333649.79 1530577.41 0 0
16422 177 TRANSFER 239556.06 C759888632 239556.06 0.00 C1568773487 0.00 0.00 1 0
16423 11 PAYMENT 4875.29 C835081554 0.00 0.00 M671919480 0.00 0.00 0 0
16424 152 TRANSFER 102945.99 C488688199 102945.99 0.00 C1626596200 0.00 0.00 1 0
16425 194 CASH_OUT 21741.36 C996937120 21741.36 0.00 C2092944197 1291168.87 1312910.22 1 0

16426 rows × 11 columns


Build the model¶

In [20]:
from sklearn.preprocessing import LabelEncoder
encoder = LabelEncoder()
new_data['type'] = encoder.fit_transform(new_data['type'])
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, confusion_matrix
import matplotlib.pyplot as plt
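`LabelEncoder` assigns integer codes in alphabetical order of the class labels, which is why `CASH_OUT` shows up as 1 and `TRANSFER` as 4 in the encoded frame below. A small sketch recovering that mapping (illustrative values, same five categories as the dataset):

```python
import pandas as pd
from sklearn.preprocessing import LabelEncoder

# Illustrative transaction types (same categories as the dataset)
types = pd.Series(["PAYMENT", "TRANSFER", "CASH_OUT", "CASH_IN", "DEBIT", "PAYMENT"])

encoder = LabelEncoder()
codes = encoder.fit_transform(types)

# classes_ lists the labels in code order (alphabetical), so zipping it
# with a range recovers the label -> code mapping
mapping = dict(zip(encoder.classes_, range(len(encoder.classes_))))
print(mapping)
# {'CASH_IN': 0, 'CASH_OUT': 1, 'DEBIT': 2, 'PAYMENT': 3, 'TRANSFER': 4}
```

Keeping this mapping around is useful later, e.g. for labeling plots of the encoded column.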

Dropping unnecessary attributes¶

In [21]:
df = new_data.drop(columns=['nameOrig', 'nameDest', 'step', 'isFlaggedFraud'])
In [22]:
df
Out[22]:
type amount oldbalanceOrg newbalanceOrig oldbalanceDest newbalanceDest isFraud
0 1 129917.80 0.00 0.00 417307.02 547224.82 0
1 1 269692.53 269692.53 0.00 2248741.77 2518434.30 1
2 1 203848.61 203848.61 0.00 188810.39 392659.00 1
3 1 16207.04 16207.04 0.00 136857.72 153064.76 1
4 4 1711534.01 0.00 0.00 3746831.76 5458365.76 0
... ... ... ... ... ... ... ...
16421 0 32269.89 45831.00 78100.89 624031.14 591761.26 0
16422 1 1220.80 1220.80 0.00 338688.89 339909.69 1
16423 3 33269.66 0.00 0.00 0.00 0.00 0
16424 4 468446.77 468446.77 0.00 0.00 0.00 1
16425 1 4900487.15 4900487.15 0.00 0.00 4900487.15 1

16426 rows × 7 columns

Training The Model¶

In [23]:
X = df.drop(columns='isFraud')
Y = df['isFraud']
In [24]:
print(X)
       type      amount  oldbalanceOrg  newbalanceOrig  oldbalanceDest  \
0         1   129917.80           0.00            0.00       417307.02   
1         1   269692.53      269692.53            0.00      2248741.77   
2         1   203848.61      203848.61            0.00       188810.39   
3         1    16207.04       16207.04            0.00       136857.72   
4         4  1711534.01           0.00            0.00      3746831.76   
...     ...         ...            ...             ...             ...   
16421     0    32269.89       45831.00        78100.89       624031.14   
16422     1     1220.80        1220.80            0.00       338688.89   
16423     3    33269.66           0.00            0.00            0.00   
16424     4   468446.77      468446.77            0.00            0.00   
16425     1  4900487.15     4900487.15            0.00            0.00   

       newbalanceDest  
0           547224.82  
1          2518434.30  
2           392659.00  
3           153064.76  
4          5458365.76  
...               ...  
16421       591761.26  
16422       339909.69  
16423            0.00  
16424            0.00  
16425      4900487.15  

[16426 rows x 6 columns]
In [25]:
print(Y)
0        0
1        1
2        1
3        1
4        0
        ..
16421    0
16422    1
16423    0
16424    1
16425    1
Name: isFraud, Length: 16426, dtype: int64
In [26]:
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, stratify=Y, random_state=2)
In [27]:
print(X.shape,X_train.shape,X_test.shape,Y_test.shape,Y_train.shape)
(16426, 6) (13140, 6) (3286, 6) (3286,) (13140,)
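The MLP below is trained on raw features whose scales differ by orders of magnitude (encoded types 0–4 next to balances in the millions). scikit-learn's `MLPClassifier` is sensitive to feature scale, so standardizing inside a pipeline is usually worth trying before tuning; a minimal sketch on synthetic stand-in data (hypothetical values, not the fraud set):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in with wildly different column scales
# (hypothetical data; the real X mixes small codes with large balances)
rng = np.random.default_rng(0)
X_toy = np.column_stack([
    rng.integers(0, 5, 200).astype(float),  # encoded transaction type
    rng.uniform(0, 5_000_000, 200),         # amount-like column
])
y_toy = (X_toy[:, 1] > 2_500_000).astype(int)

# StandardScaler is fit on the training data inside the pipeline,
# so the same transform is applied consistently at predict time
pipe = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(10,), max_iter=500, random_state=0),
)
pipe.fit(X_toy, y_toy)
print(round(pipe.score(X_toy, y_toy), 2))
```

A pipeline like this can be dropped straight into `RandomizedSearchCV` in place of the bare `MLPClassifier` (parameter names then gain the `mlpclassifier__` prefix).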

Hyperparameter Tuning Using RandomizedSearchCV¶

In [35]:
param = {
    'hidden_layer_sizes': [(2,),(10,),(50,), (100,), (50,50), (100,100)],
    'activation': ['relu', 'tanh', 'logistic'],
    'alpha': [0.0001, 0.001, 0.01,0.1],
    'momentum': [0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9],
    'verbose':[1],
    'solver':['adam'],
    'max_iter':[1000],
}
mlp =MLPClassifier()
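One caveat with the grid above: in scikit-learn's `MLPClassifier`, `momentum` is only used when `solver='sgd'`; since the grid fixes `solver='adam'`, the nine sampled momentum values have no effect on training. A sketch of a hypothetical alternative grid that pairs momentum with the solver it belongs to:

```python
from sklearn.neural_network import MLPClassifier

# Hypothetical alternative grid: momentum only matters for solver='sgd',
# so pair it with that solver rather than the (fixed) 'adam' above
param_sgd = {
    'hidden_layer_sizes': [(50,), (100,), (50, 50)],
    'activation': ['relu', 'tanh'],
    'alpha': [0.0001, 0.001, 0.01],
    'solver': ['sgd'],
    'momentum': [0.5, 0.9],
    'max_iter': [1000],
}

# momentum is a real MLPClassifier parameter with default 0.9
print(MLPClassifier().get_params()['momentum'])  # 0.9
```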
In [36]:
from sklearn.model_selection import RandomizedSearchCV 
model = RandomizedSearchCV(mlp,param, cv=5,n_iter=50, random_state=42)
In [37]:
import time
t1=time.time()
In [38]:
model.fit(X_train, Y_train)
Iteration 1, loss = 0.61954100
Iteration 2, loss = 0.44342315
Iteration 3, loss = 0.35763599
Iteration 4, loss = 0.29211155
Iteration 5, loss = 0.23534014
Iteration 6, loss = 0.21261531
Iteration 7, loss = 0.19625930
Iteration 8, loss = 0.22125450
Iteration 9, loss = 0.22842797
Iteration 10, loss = 0.22135352
Iteration 11, loss = 0.22716298
Iteration 12, loss = 0.21323573
Iteration 13, loss = 0.25015067
Iteration 14, loss = 0.24245776
Iteration 15, loss = 0.22980726
Iteration 16, loss = 0.22861109
Iteration 17, loss = 0.22988466
Iteration 18, loss = 0.23457982
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
[... verbose training output truncated: RandomizedSearchCV fits each sampled parameter candidate on every cross-validation fold, and each fit prints a per-iteration loss trace like the one above until early stopping ...]
Iteration 15, loss = 0.17703868
Iteration 16, loss = 0.17954319
Iteration 17, loss = 0.18226546
Iteration 18, loss = 0.18220260
Iteration 19, loss = 0.17626678
Iteration 20, loss = 0.17656723
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 15.05728861
Iteration 2, loss = 8.64946734
Iteration 3, loss = 7.77860708
Iteration 4, loss = 5.80072775
Iteration 5, loss = 4.12460511
Iteration 6, loss = 4.17639039
Iteration 7, loss = 3.63193386
Iteration 8, loss = 3.68488326
Iteration 9, loss = 3.21592846
Iteration 10, loss = 3.57062313
Iteration 11, loss = 3.14641569
Iteration 12, loss = 3.39386738
Iteration 13, loss = 2.71808046
Iteration 14, loss = 3.98694777
Iteration 15, loss = 4.41558714
Iteration 16, loss = 2.90669909
Iteration 17, loss = 3.67925322
Iteration 18, loss = 3.03745316
Iteration 19, loss = 3.36480392
Iteration 20, loss = 3.49962451
Iteration 21, loss = 3.15028517
Iteration 22, loss = 2.80090112
Iteration 23, loss = 2.78175806
Iteration 24, loss = 2.27559845
Iteration 25, loss = 3.01534632
Iteration 26, loss = 2.65366903
Iteration 27, loss = 3.63271263
Iteration 28, loss = 2.58412751
Iteration 29, loss = 3.25346789
Iteration 30, loss = 2.94914494
Iteration 31, loss = 2.40664168
Iteration 32, loss = 3.41074443
Iteration 33, loss = 2.96005450
Iteration 34, loss = 3.39501406
Iteration 35, loss = 2.53079415
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 19.09758030
Iteration 2, loss = 7.50423327
Iteration 3, loss = 4.92284182
Iteration 4, loss = 3.88194908
Iteration 5, loss = 4.04933176
Iteration 6, loss = 2.99692007
Iteration 7, loss = 3.38646992
Iteration 8, loss = 3.81344428
Iteration 9, loss = 3.37717988
Iteration 10, loss = 2.94480865
Iteration 11, loss = 3.65584084
Iteration 12, loss = 3.30255649
Iteration 13, loss = 3.35183782
Iteration 14, loss = 3.10616350
Iteration 15, loss = 3.18804022
Iteration 16, loss = 3.63135219
Iteration 17, loss = 3.07328786
Iteration 18, loss = 2.98604131
Iteration 19, loss = 2.68002947
Iteration 20, loss = 3.09631353
Iteration 21, loss = 2.60741682
Iteration 22, loss = 3.18596408
Iteration 23, loss = 3.04646234
Iteration 24, loss = 3.60105635
Iteration 25, loss = 3.14050252
Iteration 26, loss = 3.28942298
Iteration 27, loss = 2.56337319
Iteration 28, loss = 2.78274556
Iteration 29, loss = 2.89467585
Iteration 30, loss = 3.42729272
Iteration 31, loss = 3.56308018
Iteration 32, loss = 2.88967259
Iteration 33, loss = 2.72590021
Iteration 34, loss = 3.09213600
Iteration 35, loss = 3.24555927
Iteration 36, loss = 2.92664238
Iteration 37, loss = 3.24514410
Iteration 38, loss = 2.89077490
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 11.56364387
Iteration 2, loss = 6.43587088
Iteration 3, loss = 5.32074517
Iteration 4, loss = 4.47316537
Iteration 5, loss = 4.08469293
Iteration 6, loss = 3.72283967
Iteration 7, loss = 3.82420945
Iteration 8, loss = 3.53929411
Iteration 9, loss = 3.37316302
Iteration 10, loss = 3.45591334
Iteration 11, loss = 3.67159868
Iteration 12, loss = 3.51813383
Iteration 13, loss = 3.33480561
Iteration 14, loss = 3.14060419
Iteration 15, loss = 3.35450488
Iteration 16, loss = 3.17809488
Iteration 17, loss = 3.44869257
Iteration 18, loss = 3.25716187
Iteration 19, loss = 3.06079254
Iteration 20, loss = 3.62069439
Iteration 21, loss = 2.90150387
Iteration 22, loss = 3.16962947
Iteration 23, loss = 2.80441759
Iteration 24, loss = 3.42876603
Iteration 25, loss = 3.55972502
Iteration 26, loss = 2.97917013
Iteration 27, loss = 2.80770808
Iteration 28, loss = 3.30026114
Iteration 29, loss = 2.62308165
Iteration 30, loss = 2.96342757
Iteration 31, loss = 3.13126939
Iteration 32, loss = 2.68860520
Iteration 33, loss = 3.03291442
Iteration 34, loss = 3.05072386
Iteration 35, loss = 2.48488511
Iteration 36, loss = 3.52540543
Iteration 37, loss = 2.89731811
Iteration 38, loss = 2.73273688
Iteration 39, loss = 3.61593472
Iteration 40, loss = 2.85797525
Iteration 41, loss = 2.87642806
Iteration 42, loss = 2.45429817
Iteration 43, loss = 2.13367257
Iteration 44, loss = 2.93202229
Iteration 45, loss = 3.35291079
Iteration 46, loss = 2.77297899
Iteration 47, loss = 3.21196279
Iteration 48, loss = 2.62911849
Iteration 49, loss = 2.82352427
Iteration 50, loss = 2.94625563
Iteration 51, loss = 2.91755613
Iteration 52, loss = 2.23358980
Iteration 53, loss = 2.70343031
Iteration 54, loss = 3.20241113
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 10.41689393
Iteration 2, loss = 7.29054581
Iteration 3, loss = 5.65490800
Iteration 4, loss = 5.90557403
Iteration 5, loss = 6.01403979
Iteration 6, loss = 5.82556028
Iteration 7, loss = 5.62360111
Iteration 8, loss = 6.35391270
Iteration 9, loss = 5.83425848
Iteration 10, loss = 3.22703369
Iteration 11, loss = 3.82367699
Iteration 12, loss = 3.82492058
Iteration 13, loss = 3.39214600
Iteration 14, loss = 3.33000195
Iteration 15, loss = 3.38098746
Iteration 16, loss = 3.66135568
Iteration 17, loss = 3.43509238
Iteration 18, loss = 3.03233073
Iteration 19, loss = 3.23907400
Iteration 20, loss = 3.31090767
Iteration 21, loss = 3.46503620
Iteration 22, loss = 2.99715759
Iteration 23, loss = 3.14946006
Iteration 24, loss = 2.42971391
Iteration 25, loss = 3.00193885
Iteration 26, loss = 3.05599357
Iteration 27, loss = 2.89253718
Iteration 28, loss = 3.38925392
Iteration 29, loss = 2.99240637
Iteration 30, loss = 3.03912920
Iteration 31, loss = 2.98794689
Iteration 32, loss = 3.08129678
Iteration 33, loss = 2.50729516
Iteration 34, loss = 3.11702996
Iteration 35, loss = 2.86973855
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 14.01240072
Iteration 2, loss = 10.22355026
Iteration 3, loss = 8.48778570
Iteration 4, loss = 5.96263722
Iteration 5, loss = 4.54819066
Iteration 6, loss = 4.64496167
Iteration 7, loss = 4.58485022
Iteration 8, loss = 3.85753924
Iteration 9, loss = 3.75600751
Iteration 10, loss = 2.87962704
Iteration 11, loss = 3.29208171
Iteration 12, loss = 3.11430997
Iteration 13, loss = 2.98033733
Iteration 14, loss = 3.19777809
Iteration 15, loss = 2.30100369
Iteration 16, loss = 3.36806592
Iteration 17, loss = 3.09855230
Iteration 18, loss = 2.95054813
Iteration 19, loss = 2.80749034
Iteration 20, loss = 2.79781539
Iteration 21, loss = 2.43483374
Iteration 22, loss = 2.83460019
Iteration 23, loss = 2.82674922
Iteration 24, loss = 3.10892795
Iteration 25, loss = 2.57855996
Iteration 26, loss = 2.89527037
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.71241398
Iteration 2, loss = 0.65974106
Iteration 3, loss = 0.63573954
Iteration 4, loss = 0.61942302
Iteration 5, loss = 0.59593193
Iteration 6, loss = 0.58194426
Iteration 7, loss = 0.54974867
Iteration 8, loss = 0.54729977
Iteration 9, loss = 0.56949533
Iteration 10, loss = 0.55590998
Iteration 11, loss = 0.53666071
Iteration 12, loss = 0.49200381
Iteration 13, loss = 0.47176155
Iteration 14, loss = 0.49984763
Iteration 15, loss = 0.49794776
Iteration 16, loss = 0.48211399
Iteration 17, loss = 0.47153566
Iteration 18, loss = 0.45783758
Iteration 19, loss = 0.45181904
Iteration 20, loss = 0.44210046
Iteration 21, loss = 0.43144415
Iteration 22, loss = 0.42169179
Iteration 23, loss = 0.43726834
Iteration 24, loss = 0.42144226
Iteration 25, loss = 0.41387872
Iteration 26, loss = 0.40747905
Iteration 27, loss = 0.40212188
Iteration 28, loss = 0.39691499
Iteration 29, loss = 0.39183592
Iteration 30, loss = 0.38691245
Iteration 31, loss = 0.38231089
Iteration 32, loss = 0.37791578
Iteration 33, loss = 0.37411730
Iteration 34, loss = 0.37042721
Iteration 35, loss = 0.36569130
Iteration 36, loss = 0.36125085
Iteration 37, loss = 0.36031579
Iteration 38, loss = 0.35558178
Iteration 39, loss = 0.35608730
Iteration 40, loss = 0.35647597
Iteration 41, loss = 0.34713643
Iteration 42, loss = 0.34958167
Iteration 43, loss = 0.34734136
Iteration 44, loss = 0.34555969
Iteration 45, loss = 0.34324754
Iteration 46, loss = 0.33990252
Iteration 47, loss = 0.37434322
Iteration 48, loss = 0.55944685
Iteration 49, loss = 0.63534642
Iteration 50, loss = 0.57197137
Iteration 51, loss = 0.45817016
Iteration 52, loss = 0.44800700
Iteration 53, loss = 0.44362306
Iteration 54, loss = 0.43931206
Iteration 55, loss = 0.43508818
Iteration 56, loss = 0.43141723
Iteration 57, loss = 0.42750065
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.70066263
Iteration 2, loss = 0.66921422
Iteration 3, loss = 0.62598608
Iteration 4, loss = 0.58300216
Iteration 5, loss = 0.55810425
Iteration 6, loss = 0.54103393
Iteration 7, loss = 0.52827322
Iteration 8, loss = 0.51349335
Iteration 9, loss = 0.50133661
Iteration 10, loss = 0.48998606
Iteration 11, loss = 0.47689395
Iteration 12, loss = 0.44838203
Iteration 13, loss = 0.43331822
Iteration 14, loss = 0.42022359
Iteration 15, loss = 0.48686177
Iteration 16, loss = 0.49897461
Iteration 17, loss = 0.54115439
Iteration 18, loss = 0.50485413
Iteration 19, loss = 0.47255456
Iteration 20, loss = 0.45205805
Iteration 21, loss = 0.44020027
Iteration 22, loss = 0.43035978
Iteration 23, loss = 0.42482780
Iteration 24, loss = 0.41593179
Iteration 25, loss = 0.40877759
Iteration 26, loss = 0.40961505
Iteration 27, loss = 0.40009381
Iteration 28, loss = 0.39218869
Iteration 29, loss = 0.37622358
Iteration 30, loss = 0.41396658
Iteration 31, loss = 0.46496986
Iteration 32, loss = 0.44794297
Iteration 33, loss = 0.43627561
Iteration 34, loss = 0.42956023
Iteration 35, loss = 0.42449080
Iteration 36, loss = 0.42158403
Iteration 37, loss = 0.42631570
Iteration 38, loss = 0.42315315
Iteration 39, loss = 0.41740299
Iteration 40, loss = 0.40850648
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.82998370
Iteration 2, loss = 0.77769332
Iteration 3, loss = 0.75584942
Iteration 4, loss = 0.72302411
Iteration 5, loss = 0.69270472
Iteration 6, loss = 0.66441967
Iteration 7, loss = 0.64304830
Iteration 8, loss = 0.62387339
Iteration 9, loss = 0.60676795
Iteration 10, loss = 0.59113712
Iteration 11, loss = 0.59042222
Iteration 12, loss = 0.58486077
Iteration 13, loss = 0.56155191
Iteration 14, loss = 0.53154363
Iteration 15, loss = 0.52074400
Iteration 16, loss = 0.51068724
Iteration 17, loss = 0.50205364
Iteration 18, loss = 0.50399754
Iteration 19, loss = 0.51285515
Iteration 20, loss = 0.50579560
Iteration 21, loss = 0.50062015
Iteration 22, loss = 0.49561335
Iteration 23, loss = 0.48734487
Iteration 24, loss = 0.48117985
Iteration 25, loss = 0.47552207
Iteration 26, loss = 0.48152658
Iteration 27, loss = 0.60763401
Iteration 28, loss = 0.59091435
Iteration 29, loss = 0.58141588
Iteration 30, loss = 0.57251127
Iteration 31, loss = 0.56932540
Iteration 32, loss = 0.57264434
Iteration 33, loss = 0.55435824
Iteration 34, loss = 0.53749429
Iteration 35, loss = 0.52866361
Iteration 36, loss = 0.52343206
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.67741968
Iteration 2, loss = 0.64271463
Iteration 3, loss = 0.61461211
Iteration 4, loss = 0.58733438
Iteration 5, loss = 0.54043759
Iteration 6, loss = 0.49318118
Iteration 7, loss = 0.45753548
Iteration 8, loss = 0.46225128
Iteration 9, loss = 0.44777305
Iteration 10, loss = 0.43250612
Iteration 11, loss = 0.46650523
Iteration 12, loss = 0.45401804
Iteration 13, loss = 0.45597455
Iteration 14, loss = 0.45780771
Iteration 15, loss = 0.49499601
Iteration 16, loss = 0.49440925
Iteration 17, loss = 0.48499206
Iteration 18, loss = 0.47545712
Iteration 19, loss = 0.44203369
Iteration 20, loss = 0.42729126
Iteration 21, loss = 0.42352933
Iteration 22, loss = 0.41266014
Iteration 23, loss = 0.40510584
Iteration 24, loss = 0.39718913
Iteration 25, loss = 0.39173839
Iteration 26, loss = 0.40702650
Iteration 27, loss = 0.40349777
Iteration 28, loss = 0.39810956
Iteration 29, loss = 0.39587374
Iteration 30, loss = 0.38152749
Iteration 31, loss = 0.36371756
Iteration 32, loss = 0.35925310
Iteration 33, loss = 0.36044066
Iteration 34, loss = 0.35739985
Iteration 35, loss = 0.36195041
Iteration 36, loss = 0.37415460
Iteration 37, loss = 0.36996540
Iteration 38, loss = 0.37830214
Iteration 39, loss = 0.37655558
Iteration 40, loss = 0.37361756
Iteration 41, loss = 0.37056557
Iteration 42, loss = 0.36657837
Iteration 43, loss = 0.36104963
Iteration 44, loss = 0.35660119
Iteration 45, loss = 0.35413950
Iteration 46, loss = 0.35177983
Iteration 47, loss = 0.34945510
Iteration 48, loss = 0.34718491
Iteration 49, loss = 0.34498329
Iteration 50, loss = 0.34282236
Iteration 51, loss = 0.34019607
Iteration 52, loss = 0.33800821
Iteration 53, loss = 0.33585096
Iteration 54, loss = 0.33359111
Iteration 55, loss = 0.33153694
Iteration 56, loss = 0.32965083
Iteration 57, loss = 0.32771445
Iteration 58, loss = 0.32576429
Iteration 59, loss = 0.32388601
Iteration 60, loss = 0.32220303
Iteration 61, loss = 0.32054460
Iteration 62, loss = 0.31751484
Iteration 63, loss = 0.31614241
Iteration 64, loss = 0.31525088
Iteration 65, loss = 0.31345035
Iteration 66, loss = 0.30904298
Iteration 67, loss = 0.31256538
Iteration 68, loss = 0.31189058
Iteration 69, loss = 0.30997620
Iteration 70, loss = 0.30789199
Iteration 71, loss = 0.30668911
Iteration 72, loss = 0.30554898
Iteration 73, loss = 0.30431253
Iteration 74, loss = 0.30310646
Iteration 75, loss = 0.30194777
Iteration 76, loss = 0.30086012
Iteration 77, loss = 0.29971401
Iteration 78, loss = 0.29864745
Iteration 79, loss = 0.29759143
Iteration 80, loss = 0.30476533
Iteration 81, loss = 0.29243640
Iteration 82, loss = 0.28959978
Iteration 83, loss = 0.31849761
Iteration 84, loss = 0.31818623
Iteration 85, loss = 0.31613168
Iteration 86, loss = 0.31032859
Iteration 87, loss = 0.28359059
Iteration 88, loss = 0.27779660
Iteration 89, loss = 0.27678431
Iteration 90, loss = 0.27708886
Iteration 91, loss = 0.29409312
Iteration 92, loss = 0.29343543
Iteration 93, loss = 0.29229995
Iteration 94, loss = 0.29155819
Iteration 95, loss = 0.29073889
Iteration 96, loss = 0.28980192
Iteration 97, loss = 0.28930884
Iteration 98, loss = 0.28865743
Iteration 99, loss = 0.28803685
Iteration 100, loss = 0.28739928
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.81635944
Iteration 2, loss = 0.76272627
Iteration 3, loss = 0.71691960
Iteration 4, loss = 0.70607987
Iteration 5, loss = 0.68334346
Iteration 6, loss = 0.65336483
Iteration 7, loss = 0.62697584
Iteration 8, loss = 0.60307354
Iteration 9, loss = 0.58077884
Iteration 10, loss = 0.56458769
Iteration 11, loss = 0.54569467
Iteration 12, loss = 0.57633058
Iteration 13, loss = 0.57365550
Iteration 14, loss = 0.56137120
Iteration 15, loss = 0.54956149
Iteration 16, loss = 0.53912520
Iteration 17, loss = 0.52956457
Iteration 18, loss = 0.52048090
Iteration 19, loss = 0.51240522
Iteration 20, loss = 0.50634362
Iteration 21, loss = 0.50035773
Iteration 22, loss = 0.49426187
Iteration 23, loss = 0.48765238
Iteration 24, loss = 0.48238709
Iteration 25, loss = 0.47605363
Iteration 26, loss = 0.47001334
Iteration 27, loss = 0.46455122
Iteration 28, loss = 0.46074834
Iteration 29, loss = 0.45523420
Iteration 30, loss = 0.45097169
Iteration 31, loss = 0.44648785
Iteration 32, loss = 0.44285282
Iteration 33, loss = 0.43831257
Iteration 34, loss = 0.43585300
Iteration 35, loss = 0.43316663
Iteration 36, loss = 0.43030007
Iteration 37, loss = 0.42708849
Iteration 38, loss = 0.43186982
Iteration 39, loss = 0.42989130
Iteration 40, loss = 0.42662024
Iteration 41, loss = 0.42397671
Iteration 42, loss = 0.42166159
Iteration 43, loss = 0.41941398
Iteration 44, loss = 0.41345973
Iteration 45, loss = 0.42045559
Iteration 46, loss = 0.47662983
Iteration 47, loss = 0.43036579
Iteration 48, loss = 0.42676078
Iteration 49, loss = 0.42121870
Iteration 50, loss = 0.41917340
Iteration 51, loss = 0.41717809
Iteration 52, loss = 0.41545712
Iteration 53, loss = 0.41386393
Iteration 54, loss = 0.41239723
Iteration 55, loss = 0.41093112
Iteration 56, loss = 0.40961657
Iteration 57, loss = 0.40834246
Iteration 58, loss = 0.40712313
Iteration 59, loss = 0.40598490
Iteration 60, loss = 0.40490243
Iteration 61, loss = 0.40387621
Iteration 62, loss = 0.40289573
Iteration 63, loss = 0.40199522
Iteration 64, loss = 0.40108032
Iteration 65, loss = 0.40024408
Iteration 66, loss = 0.39944073
Iteration 67, loss = 0.39866536
Iteration 68, loss = 0.39807854
Iteration 69, loss = 0.39736413
Iteration 70, loss = 0.39671538
Iteration 71, loss = 0.39599281
Iteration 72, loss = 0.39551117
Iteration 73, loss = 0.39491967
Iteration 74, loss = 0.39439005
Iteration 75, loss = 0.39213080
Iteration 76, loss = 0.39052684
Iteration 77, loss = 0.39000665
Iteration 78, loss = 0.38940444
Iteration 79, loss = 0.38884157
Iteration 80, loss = 0.38836762
Iteration 81, loss = 0.38792959
Iteration 82, loss = 0.38746451
Iteration 83, loss = 0.38703237
Iteration 84, loss = 0.38669355
Iteration 85, loss = 0.38630607
Iteration 86, loss = 0.38610202
Iteration 87, loss = 0.38479121
Iteration 88, loss = 0.38143170
Iteration 89, loss = 0.38083267
Iteration 90, loss = 0.38035475
Iteration 91, loss = 0.37948962
Iteration 92, loss = 0.37929004
Iteration 93, loss = 0.40290317
Iteration 94, loss = 0.42955043
Iteration 95, loss = 0.42069936
Iteration 96, loss = 0.41504930
Iteration 97, loss = 0.35777673
Iteration 98, loss = 0.32027115
Iteration 99, loss = 0.31364152
Iteration 100, loss = 0.28763785
Iteration 101, loss = 0.28190707
Iteration 102, loss = 0.27979951
Iteration 103, loss = 0.31047722
Iteration 104, loss = 0.40055136
Iteration 105, loss = 0.44581836
Iteration 106, loss = 0.42775376
Iteration 107, loss = 0.30699618
Iteration 108, loss = 0.28500331
Iteration 109, loss = 0.28087670
Iteration 110, loss = 0.27841367
Iteration 111, loss = 0.27557069
Iteration 112, loss = 0.32565933
Iteration 113, loss = 0.36128400
Iteration 114, loss = 0.34628226
Iteration 115, loss = 0.33752681
Iteration 116, loss = 0.32825080
Iteration 117, loss = 0.32128951
Iteration 118, loss = 0.31341039
Iteration 119, loss = 0.30721401
Iteration 120, loss = 0.30253769
Iteration 121, loss = 0.29659781
Iteration 122, loss = 0.29335318
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 14.24437814
Iteration 2, loss = 6.19206173
Iteration 3, loss = 4.41847785
Iteration 4, loss = 4.11503740
Iteration 5, loss = 3.86441629
Iteration 6, loss = 4.14187493
Iteration 7, loss = 4.27058015
Iteration 8, loss = 3.56342017
Iteration 9, loss = 3.92820903
Iteration 10, loss = 3.30612935
Iteration 11, loss = 3.92000890
Iteration 12, loss = 4.41414764
Iteration 13, loss = 4.42615640
Iteration 14, loss = 3.88638344
Iteration 15, loss = 4.26472830
Iteration 16, loss = 3.79729015
Iteration 17, loss = 3.61682241
Iteration 18, loss = 3.80556390
Iteration 19, loss = 3.92338776
Iteration 20, loss = 3.81475551
Iteration 21, loss = 3.35906701
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 17.57720539
Iteration 2, loss = 5.63120331
Iteration 3, loss = 4.30272803
Iteration 4, loss = 4.19369733
Iteration 5, loss = 3.38494495
Iteration 6, loss = 3.44444085
Iteration 7, loss = 3.28886004
Iteration 8, loss = 3.41063957
Iteration 9, loss = 3.46754175
Iteration 10, loss = 3.41291419
Iteration 11, loss = 2.85093842
Iteration 12, loss = 3.27981948
Iteration 13, loss = 3.23347921
Iteration 14, loss = 3.55190992
Iteration 15, loss = 3.41363714
Iteration 16, loss = 2.83039719
Iteration 17, loss = 3.53658278
Iteration 18, loss = 3.13225989
Iteration 19, loss = 3.32867705
Iteration 20, loss = 3.04690735
Iteration 21, loss = 2.40784672
Iteration 22, loss = 2.97816330
Iteration 23, loss = 3.62820215
Iteration 24, loss = 3.30564857
Iteration 25, loss = 2.82081936
Iteration 26, loss = 2.37246938
Iteration 27, loss = 3.77972330
Iteration 28, loss = 3.06844203
Iteration 29, loss = 2.69122263
Iteration 30, loss = 3.01919873
Iteration 31, loss = 2.49871235
Iteration 32, loss = 2.75497276
Iteration 33, loss = 2.37753953
Iteration 34, loss = 3.25929854
Iteration 35, loss = 2.78827990
Iteration 36, loss = 2.47652125
Iteration 37, loss = 2.01918906
Iteration 38, loss = 2.84381511
Iteration 39, loss = 2.94905403
Iteration 40, loss = 3.07741000
Iteration 41, loss = 2.30335516
Iteration 42, loss = 2.23988733
Iteration 43, loss = 2.67438028
Iteration 44, loss = 2.65005132
Iteration 45, loss = 2.90404726
Iteration 46, loss = 3.11363827
Iteration 47, loss = 3.22059089
Iteration 48, loss = 2.93873513
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 11.82865720
Iteration 2, loss = 8.39398632
Iteration 3, loss = 4.69674418
Iteration 4, loss = 4.56919823
Iteration 5, loss = 3.94426995
Iteration 6, loss = 3.84550507
Iteration 7, loss = 4.40953564
Iteration 8, loss = 3.31129572
Iteration 9, loss = 3.43398146
Iteration 10, loss = 3.61567325
Iteration 11, loss = 3.22646610
Iteration 12, loss = 2.78025471
Iteration 13, loss = 3.65155015
Iteration 14, loss = 3.88254740
Iteration 15, loss = 3.23001159
Iteration 16, loss = 3.46759074
Iteration 17, loss = 2.82409299
Iteration 18, loss = 3.28574621
Iteration 19, loss = 3.37808698
Iteration 20, loss = 3.46838644
Iteration 21, loss = 2.39775577
Iteration 22, loss = 3.13920309
Iteration 23, loss = 2.73334543
Iteration 24, loss = 3.16633015
Iteration 25, loss = 2.65003420
Iteration 26, loss = 3.25157423
Iteration 27, loss = 2.99223187
Iteration 28, loss = 3.32680949
Iteration 29, loss = 3.42782919
Iteration 30, loss = 2.86563763
Iteration 31, loss = 3.26850853
Iteration 32, loss = 2.55018711
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 12.17170401
Iteration 2, loss = 8.42535259
Iteration 3, loss = 6.46873759
Iteration 4, loss = 5.92473312
Iteration 5, loss = 4.54823630
Iteration 6, loss = 3.19671918
Iteration 7, loss = 3.21676569
Iteration 8, loss = 3.45767364
Iteration 9, loss = 3.29713875
Iteration 10, loss = 3.35729695
Iteration 11, loss = 3.14254388
Iteration 12, loss = 3.20607285
Iteration 13, loss = 3.11330069
Iteration 14, loss = 3.06346087
Iteration 15, loss = 4.10460438
Iteration 16, loss = 2.98454518
Iteration 17, loss = 2.89126178
Iteration 18, loss = 3.08218280
Iteration 19, loss = 3.20222592
Iteration 20, loss = 3.00395474
Iteration 21, loss = 2.94041159
Iteration 22, loss = 2.86152588
Iteration 23, loss = 3.41670432
Iteration 24, loss = 3.14754579
Iteration 25, loss = 2.65537224
Iteration 26, loss = 2.40838547
Iteration 27, loss = 2.40358672
Iteration 28, loss = 3.48681254
Iteration 29, loss = 2.47038925
Iteration 30, loss = 2.12755493
Iteration 31, loss = 2.36466991
Iteration 32, loss = 3.08620938
Iteration 33, loss = 2.49551915
Iteration 34, loss = 2.47266156
Iteration 35, loss = 3.07102820
Iteration 36, loss = 2.95968980
Iteration 37, loss = 2.79917191
Iteration 38, loss = 2.92295421
Iteration 39, loss = 3.25565889
Iteration 40, loss = 2.77250884
Iteration 41, loss = 2.63352210
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 24.20941687
Iteration 2, loss = 14.12903789
Iteration 3, loss = 12.60852254
Iteration 4, loss = 7.18668981
Iteration 5, loss = 4.32037281
Iteration 6, loss = 4.78770212
Iteration 7, loss = 3.90968306
Iteration 8, loss = 3.02874070
Iteration 9, loss = 2.73362775
Iteration 10, loss = 3.17981433
Iteration 11, loss = 3.11198052
Iteration 12, loss = 2.91808673
Iteration 13, loss = 3.07456435
Iteration 14, loss = 2.86294829
Iteration 15, loss = 3.37602715
Iteration 16, loss = 2.84136015
Iteration 17, loss = 2.89478543
Iteration 18, loss = 2.83908487
Iteration 19, loss = 2.46219390
Iteration 20, loss = 2.87787060
Iteration 21, loss = 2.42532956
Iteration 22, loss = 2.83226703
Iteration 23, loss = 2.68673395
Iteration 24, loss = 3.38792450
Iteration 25, loss = 3.15529796
Iteration 26, loss = 3.15585074
Iteration 27, loss = 2.94313467
Iteration 28, loss = 2.23793970
Iteration 29, loss = 2.75018847
Iteration 30, loss = 2.55129342
Iteration 31, loss = 2.88576542
Iteration 32, loss = 3.04873995
Iteration 33, loss = 3.23403673
Iteration 34, loss = 2.53632451
Iteration 35, loss = 2.53358217
Iteration 36, loss = 3.10544338
Iteration 37, loss = 2.89526316
Iteration 38, loss = 3.11824446
Iteration 39, loss = 2.82757980
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 10.52693714
Iteration 2, loss = 5.41987350
Iteration 3, loss = 3.99621556
Iteration 4, loss = 4.17908673
Iteration 5, loss = 3.50693349
Iteration 6, loss = 3.62522128
Iteration 7, loss = 4.22114034
Iteration 8, loss = 4.05031610
Iteration 9, loss = 3.83182574
Iteration 10, loss = 3.56944996
Iteration 11, loss = 2.90087838
Iteration 12, loss = 3.56970084
Iteration 13, loss = 3.33132155
Iteration 14, loss = 2.97384623
Iteration 15, loss = 3.83591267
Iteration 16, loss = 3.58677083
Iteration 17, loss = 3.34504869
Iteration 18, loss = 3.40505469
Iteration 19, loss = 2.74478944
Iteration 20, loss = 3.04844951
Iteration 21, loss = 3.18059980
Iteration 22, loss = 3.39840851
Iteration 23, loss = 2.98157987
Iteration 24, loss = 2.90953683
Iteration 25, loss = 3.04394820
Iteration 26, loss = 3.34548512
Iteration 27, loss = 3.31918881
Iteration 28, loss = 2.47829146
Iteration 29, loss = 3.09768390
Iteration 30, loss = 3.75077062
Iteration 31, loss = 2.35866647
Iteration 32, loss = 3.43702051
Iteration 33, loss = 2.32902273
Iteration 34, loss = 3.28596963
Iteration 35, loss = 2.75731378
Iteration 36, loss = 2.42696526
Iteration 37, loss = 2.89468213
Iteration 38, loss = 2.91158913
Iteration 39, loss = 3.03086460
Iteration 40, loss = 2.79133322
Iteration 41, loss = 2.97874041
Iteration 42, loss = 2.74721359
Iteration 43, loss = 3.08253421
Iteration 44, loss = 2.44560735
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 10.51109455
Iteration 2, loss = 7.34672538
Iteration 3, loss = 7.22376638
Iteration 4, loss = 6.28783106
Iteration 5, loss = 5.72874822
Iteration 6, loss = 4.82375027
Iteration 7, loss = 3.85342611
Iteration 8, loss = 4.12596080
Iteration 9, loss = 3.75699896
Iteration 10, loss = 3.51788006
Iteration 11, loss = 3.69676999
Iteration 12, loss = 3.36163899
Iteration 13, loss = 4.26152601
Iteration 14, loss = 3.44840777
Iteration 15, loss = 3.09994611
Iteration 16, loss = 3.25399208
Iteration 17, loss = 3.00212593
Iteration 18, loss = 3.81039578
Iteration 19, loss = 3.59458925
Iteration 20, loss = 2.93946431
Iteration 21, loss = 3.53878836
Iteration 22, loss = 3.50074780
Iteration 23, loss = 3.14830286
Iteration 24, loss = 3.19060591
Iteration 25, loss = 2.98179863
Iteration 26, loss = 2.99398752
Iteration 27, loss = 3.97373618
Iteration 28, loss = 3.30765911
Iteration 29, loss = 3.29566671
Iteration 30, loss = 3.70682232
Iteration 31, loss = 2.93164018
Iteration 32, loss = 3.13198946
Iteration 33, loss = 2.54985984
Iteration 34, loss = 2.30033141
Iteration 35, loss = 3.01180323
Iteration 36, loss = 2.38838903
Iteration 37, loss = 2.71045070
Iteration 38, loss = 2.01463201
Iteration 39, loss = 2.78898615
Iteration 40, loss = 2.29858592
Iteration 41, loss = 3.91950943
Iteration 42, loss = 3.06196285
Iteration 43, loss = 2.99471131
Iteration 44, loss = 2.76837607
Iteration 45, loss = 2.14710807
Iteration 46, loss = 2.79155394
Iteration 47, loss = 2.37929272
Iteration 48, loss = 2.16110683
Iteration 49, loss = 2.96453199
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 13.81447375
Iteration 2, loss = 10.90931557
Iteration 3, loss = 6.94800864
Iteration 4, loss = 5.85597692
Iteration 5, loss = 5.57307737
Iteration 6, loss = 4.66834589
Iteration 7, loss = 3.33223451
Iteration 8, loss = 3.96825166
Iteration 9, loss = 2.52773778
Iteration 10, loss = 2.97174737
Iteration 11, loss = 2.41482521
Iteration 12, loss = 2.28647082
Iteration 13, loss = 3.01345647
Iteration 14, loss = 2.29632455
Iteration 15, loss = 2.22523566
Iteration 16, loss = 2.50367615
Iteration 17, loss = 3.56239916
Iteration 18, loss = 2.98469688
Iteration 19, loss = 2.76279461
Iteration 20, loss = 3.06110353
Iteration 21, loss = 3.38970782
Iteration 22, loss = 3.50541994
Iteration 23, loss = 3.25426746
Iteration 24, loss = 2.73375693
Iteration 25, loss = 2.40872149
Iteration 26, loss = 2.36043407
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
[Per-iteration training output for 18 further fits condensed for readability. Every fit ended with the same message: "Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping."]

 fit | iterations | first loss | last loss
-----+------------+------------+----------
   1 |         41 |     9.3372 |    2.2844
   2 |         17 |    11.2717 |    2.9498
   3 |         62 |     0.7904 |    0.2564
   4 |         30 |     0.7528 |    0.5746
   5 |        141 |     1.2396 |    0.2345
   6 |         26 |     0.7323 |    0.3564
   7 |        118 |     0.7683 |    0.4365
   8 |         47 |     0.6324 |    0.3860
   9 |         27 |     0.7462 |    0.7540
  10 |        143 |     1.0107 |    0.5803
  11 |         22 |     0.8127 |    0.6528
  12 |         47 |     0.6758 |    0.5328
  13 |         43 |     0.7480 |    0.4157
  14 |         41 |     0.7324 |    0.4521
  15 |         43 |     0.6627 |    0.3446
  16 |         39 |     0.6782 |    0.3376
  17 |         39 |     0.6235 |    0.4153
  18 |         73 |     0.8769 |    0.2317
Iteration 1, loss = 0.93888706
Iteration 2, loss = 0.79452901
Iteration 3, loss = 0.70445454
Iteration 4, loss = 0.63018535
Iteration 5, loss = 0.57550177
Iteration 6, loss = 0.52818412
Iteration 7, loss = 0.48341476
Iteration 8, loss = 0.44517076
Iteration 9, loss = 0.43045492
Iteration 10, loss = 0.39190749
Iteration 11, loss = 0.36788485
Iteration 12, loss = 0.35138424
Iteration 13, loss = 0.33572193
Iteration 14, loss = 0.32119883
Iteration 15, loss = 0.30644085
Iteration 16, loss = 0.29163760
Iteration 17, loss = 0.28175578
Iteration 18, loss = 0.34534853
Iteration 19, loss = 0.31224381
Iteration 20, loss = 0.26093452
Iteration 21, loss = 0.25670540
Iteration 22, loss = 0.26888550
Iteration 23, loss = 0.26746470
Iteration 24, loss = 0.25033620
Iteration 25, loss = 0.23517273
Iteration 26, loss = 0.22654024
Iteration 27, loss = 0.22144743
Iteration 28, loss = 0.21818581
Iteration 29, loss = 0.21563908
Iteration 30, loss = 0.21635938
Iteration 31, loss = 0.21340081
Iteration 32, loss = 0.22483249
Iteration 33, loss = 0.22180889
Iteration 34, loss = 0.21824496
Iteration 35, loss = 0.21563698
Iteration 36, loss = 0.21248304
Iteration 37, loss = 0.20999092
Iteration 38, loss = 0.20826356
Iteration 39, loss = 0.20665242
Iteration 40, loss = 0.20511382
Iteration 41, loss = 0.20371805
Iteration 42, loss = 0.20244462
Iteration 43, loss = 0.20117971
Iteration 44, loss = 0.19993291
Iteration 45, loss = 0.19845971
Iteration 46, loss = 0.19739482
Iteration 47, loss = 0.19639685
Iteration 48, loss = 0.19544155
Iteration 49, loss = 0.19453260
Iteration 50, loss = 0.19368147
Iteration 51, loss = 0.19284073
Iteration 52, loss = 0.19203774
Iteration 53, loss = 0.19126542
Iteration 54, loss = 0.19053405
Iteration 55, loss = 0.19550482
Iteration 56, loss = 0.21239988
Iteration 57, loss = 0.20705107
Iteration 58, loss = 0.20563020
Iteration 59, loss = 0.20487020
Iteration 60, loss = 0.20419678
Iteration 61, loss = 0.20363742
Iteration 62, loss = 0.20311749
Iteration 63, loss = 0.20260936
Iteration 64, loss = 0.20217268
Iteration 65, loss = 0.20170495
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.75524994
Iteration 2, loss = 0.66251918
Iteration 3, loss = 0.60646735
Iteration 4, loss = 0.56230182
Iteration 5, loss = 0.52413687
Iteration 6, loss = 0.49120152
Iteration 7, loss = 0.46262850
Iteration 8, loss = 0.43564104
Iteration 9, loss = 0.41106363
Iteration 10, loss = 0.39124540
Iteration 11, loss = 0.37337292
Iteration 12, loss = 0.35723322
Iteration 13, loss = 0.34368104
Iteration 14, loss = 0.32044367
Iteration 15, loss = 0.30920920
Iteration 16, loss = 0.30005260
Iteration 17, loss = 0.29248381
Iteration 18, loss = 0.28367599
Iteration 19, loss = 0.27722254
Iteration 20, loss = 0.27166652
Iteration 21, loss = 0.26665673
Iteration 22, loss = 0.26219983
Iteration 23, loss = 0.25638108
Iteration 24, loss = 0.25243967
Iteration 25, loss = 0.24886116
Iteration 26, loss = 0.24529767
Iteration 27, loss = 0.24368720
Iteration 28, loss = 0.24106231
Iteration 29, loss = 0.23856623
Iteration 30, loss = 0.24954801
Iteration 31, loss = 0.30579965
Iteration 32, loss = 0.27832598
Iteration 33, loss = 0.25164043
Iteration 34, loss = 0.24857663
Iteration 35, loss = 0.25147677
Iteration 36, loss = 0.35145546
Iteration 37, loss = 0.25772102
Iteration 38, loss = 0.25234731
Iteration 39, loss = 0.24977411
Iteration 40, loss = 0.24779840
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.84376126
Iteration 2, loss = 0.77691261
Iteration 3, loss = 0.73824318
Iteration 4, loss = 0.71708462
Iteration 5, loss = 0.69952347
Iteration 6, loss = 0.67409923
Iteration 7, loss = 0.65373084
Iteration 8, loss = 0.63566278
Iteration 9, loss = 0.61985349
Iteration 10, loss = 0.60308380
Iteration 11, loss = 0.58754173
Iteration 12, loss = 0.57379951
Iteration 13, loss = 0.55681498
Iteration 14, loss = 0.54255765
Iteration 15, loss = 0.52302578
Iteration 16, loss = 0.51178422
Iteration 17, loss = 0.49859185
Iteration 18, loss = 0.48333293
Iteration 19, loss = 0.50240877
Iteration 20, loss = 0.49609450
Iteration 21, loss = 0.48172738
Iteration 22, loss = 0.49933852
Iteration 23, loss = 0.52912935
Iteration 24, loss = 0.42056221
Iteration 25, loss = 0.40506312
Iteration 26, loss = 0.43227871
Iteration 27, loss = 0.42513188
Iteration 28, loss = 0.38459677
Iteration 29, loss = 0.35929765
Iteration 30, loss = 0.35519176
Iteration 31, loss = 0.34256021
Iteration 32, loss = 0.33492056
Iteration 33, loss = 0.32844812
Iteration 34, loss = 0.32251578
Iteration 35, loss = 0.31694282
Iteration 36, loss = 0.31175314
Iteration 37, loss = 0.30685403
Iteration 38, loss = 0.30220346
Iteration 39, loss = 0.29782202
Iteration 40, loss = 0.29364370
Iteration 41, loss = 0.28969273
Iteration 42, loss = 0.28592210
Iteration 43, loss = 0.28234480
Iteration 44, loss = 0.27891886
Iteration 45, loss = 0.27555237
Iteration 46, loss = 0.27225526
Iteration 47, loss = 0.26920695
Iteration 48, loss = 0.26636305
Iteration 49, loss = 0.26365734
Iteration 50, loss = 0.26106366
Iteration 51, loss = 0.25854936
Iteration 52, loss = 0.25617232
Iteration 53, loss = 0.25390425
Iteration 54, loss = 0.25173633
Iteration 55, loss = 0.24967064
Iteration 56, loss = 0.24768449
Iteration 57, loss = 0.24572279
Iteration 58, loss = 0.24392114
Iteration 59, loss = 0.24214585
Iteration 60, loss = 0.24041734
Iteration 61, loss = 0.23876568
Iteration 62, loss = 0.23721834
Iteration 63, loss = 0.23571534
Iteration 64, loss = 0.23430982
Iteration 65, loss = 0.23295769
Iteration 66, loss = 0.23162480
Iteration 67, loss = 0.23039454
Iteration 68, loss = 0.22915495
Iteration 69, loss = 0.22799237
Iteration 70, loss = 0.22688071
Iteration 71, loss = 0.22582104
Iteration 72, loss = 0.22478522
Iteration 73, loss = 0.22379980
Iteration 74, loss = 0.22282067
Iteration 75, loss = 0.22197338
Iteration 76, loss = 0.22103125
Iteration 77, loss = 0.22021104
Iteration 78, loss = 0.21938379
Iteration 79, loss = 0.21862758
Iteration 80, loss = 0.21787535
Iteration 81, loss = 0.21716920
Iteration 82, loss = 0.21644382
Iteration 83, loss = 0.21577613
Iteration 84, loss = 0.21515753
Iteration 85, loss = 0.21448867
Iteration 86, loss = 0.21392152
Iteration 87, loss = 0.21334127
Iteration 88, loss = 0.21284430
Iteration 89, loss = 0.21227381
Iteration 90, loss = 0.21180436
Iteration 91, loss = 0.21128834
Iteration 92, loss = 0.21081380
Iteration 93, loss = 0.21039714
Iteration 94, loss = 0.20994192
Iteration 95, loss = 0.20951522
Iteration 96, loss = 0.20911982
Iteration 97, loss = 0.20919076
Iteration 98, loss = 0.20891886
Iteration 99, loss = 0.20859606
Iteration 100, loss = 0.20827007
Iteration 101, loss = 0.20794358
Iteration 102, loss = 0.20761924
Iteration 103, loss = 0.20729119
Iteration 104, loss = 0.20703268
Iteration 105, loss = 0.20674600
Iteration 106, loss = 0.20649820
Iteration 107, loss = 0.20619251
Iteration 108, loss = 0.20595039
Iteration 109, loss = 0.20573932
Iteration 110, loss = 0.20553635
Iteration 111, loss = 0.20528556
Iteration 112, loss = 0.20486146
Iteration 113, loss = 0.20462525
Iteration 114, loss = 0.20443617
Iteration 115, loss = 0.20409391
Iteration 116, loss = 0.20392405
Iteration 117, loss = 0.20374816
Iteration 118, loss = 0.20360180
Iteration 119, loss = 0.20344467
Iteration 120, loss = 0.20325478
Iteration 121, loss = 0.20312063
Iteration 122, loss = 0.20296822
Iteration 123, loss = 0.20278914
Iteration 124, loss = 0.20267330
Iteration 125, loss = 0.20256495
Iteration 126, loss = 0.20244485
Iteration 127, loss = 0.20234485
Iteration 128, loss = 0.20220944
Iteration 129, loss = 0.20209254
Iteration 130, loss = 0.20201502
Iteration 131, loss = 0.20189108
Iteration 132, loss = 0.19988295
Iteration 133, loss = 0.19793790
Iteration 134, loss = 0.19784445
Iteration 135, loss = 0.19775586
Iteration 136, loss = 0.19767534
Iteration 137, loss = 0.19764997
Iteration 138, loss = 0.19743350
Iteration 139, loss = 0.20903065
Iteration 140, loss = 0.28918587
Iteration 141, loss = 0.26967979
Iteration 142, loss = 0.24411397
Iteration 143, loss = 0.22609902
Iteration 144, loss = 0.21354761
Iteration 145, loss = 0.20594694
Iteration 146, loss = 0.20126551
Iteration 147, loss = 0.19870584
Iteration 148, loss = 0.19711046
Iteration 149, loss = 0.19611553
Iteration 150, loss = 0.19538603
Iteration 151, loss = 0.19484788
Iteration 152, loss = 0.19438047
Iteration 153, loss = 0.19400564
Iteration 154, loss = 0.19366106
Iteration 155, loss = 0.19331338
Iteration 156, loss = 0.19305095
Iteration 157, loss = 0.19278324
Iteration 158, loss = 0.19248009
Iteration 159, loss = 0.19222095
Iteration 160, loss = 0.19199965
Iteration 161, loss = 0.19174727
Iteration 162, loss = 0.19153301
Iteration 163, loss = 0.19131315
Iteration 164, loss = 0.19097360
Iteration 165, loss = 0.19073788
Iteration 166, loss = 0.19057627
Iteration 167, loss = 0.19041904
Iteration 168, loss = 0.19022597
Iteration 169, loss = 0.19006014
Iteration 170, loss = 0.18987181
Iteration 171, loss = 0.18972313
Iteration 172, loss = 0.18960471
Iteration 173, loss = 0.18949807
Iteration 174, loss = 0.18932133
Iteration 175, loss = 0.18923055
Iteration 176, loss = 0.18904250
Iteration 177, loss = 0.18896456
Iteration 178, loss = 0.18881405
Iteration 179, loss = 0.43010776
Iteration 180, loss = 0.67760919
Iteration 181, loss = 0.49044810
Iteration 182, loss = 0.46505539
Iteration 183, loss = 0.29306900
Iteration 184, loss = 0.27278267
Iteration 185, loss = 0.25343845
Iteration 186, loss = 0.24507868
Iteration 187, loss = 0.24123786
Iteration 188, loss = 0.23938282
Iteration 189, loss = 0.23837415
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.66099376
Iteration 2, loss = 0.59915748
Iteration 3, loss = 0.55796909
Iteration 4, loss = 0.48906383
Iteration 5, loss = 0.47814884
Iteration 6, loss = 0.46515119
Iteration 7, loss = 0.45806239
Iteration 8, loss = 0.42561907
Iteration 9, loss = 0.39606645
Iteration 10, loss = 0.39686110
Iteration 11, loss = 0.39097641
Iteration 12, loss = 0.36041969
Iteration 13, loss = 0.31760066
Iteration 14, loss = 0.29247214
Iteration 15, loss = 0.27967979
Iteration 16, loss = 0.27010826
Iteration 17, loss = 0.26311065
Iteration 18, loss = 0.25598249
Iteration 19, loss = 0.24907078
Iteration 20, loss = 0.24275005
Iteration 21, loss = 0.23543618
Iteration 22, loss = 0.23026653
Iteration 23, loss = 0.22534434
Iteration 24, loss = 0.22105431
Iteration 25, loss = 0.21738081
Iteration 26, loss = 0.20796598
Iteration 27, loss = 0.20957822
Iteration 28, loss = 0.20333736
Iteration 29, loss = 0.20064884
Iteration 30, loss = 0.19796945
Iteration 31, loss = 0.19553142
Iteration 32, loss = 0.19323017
Iteration 33, loss = 0.19098976
Iteration 34, loss = 0.18892789
Iteration 35, loss = 0.18697375
Iteration 36, loss = 0.23131508
Iteration 37, loss = 0.18477992
Iteration 38, loss = 0.18274644
Iteration 39, loss = 0.18107476
Iteration 40, loss = 0.17953610
Iteration 41, loss = 0.17808745
Iteration 42, loss = 0.17668963
Iteration 43, loss = 0.17538983
Iteration 44, loss = 0.17413342
Iteration 45, loss = 0.17293472
Iteration 46, loss = 0.17180759
Iteration 47, loss = 0.17071795
Iteration 48, loss = 0.16969442
Iteration 49, loss = 0.16888855
Iteration 50, loss = 0.23485520
Iteration 51, loss = 0.19752107
Iteration 52, loss = 0.19035485
Iteration 53, loss = 0.18643573
Iteration 54, loss = 0.18359837
Iteration 55, loss = 0.18143259
Iteration 56, loss = 0.17971642
Iteration 57, loss = 0.17828137
Iteration 58, loss = 0.17711515
Iteration 59, loss = 0.17609576
Iteration 60, loss = 0.17520512
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.90685591
Iteration 2, loss = 0.50985014
Iteration 3, loss = 0.37161717
Iteration 4, loss = 0.28751906
Iteration 5, loss = 0.24359321
Iteration 6, loss = 0.22469422
Iteration 7, loss = 0.21746691
Iteration 8, loss = 0.20160095
Iteration 9, loss = 0.19519259
Iteration 10, loss = 0.19141886
Iteration 11, loss = 0.17746181
Iteration 12, loss = 0.16367084
Iteration 13, loss = 0.17509545
Iteration 14, loss = 0.17751711
Iteration 15, loss = 0.17930813
Iteration 16, loss = 0.18200304
Iteration 17, loss = 0.17734243
Iteration 18, loss = 0.17751976
Iteration 19, loss = 0.17716436
Iteration 20, loss = 0.18259434
Iteration 21, loss = 0.18212653
Iteration 22, loss = 0.19157478
Iteration 23, loss = 0.17515505
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.43326052
Iteration 2, loss = 0.29530224
Iteration 3, loss = 0.25957993
Iteration 4, loss = 0.23116028
Iteration 5, loss = 0.22186289
Iteration 6, loss = 0.20124796
Iteration 7, loss = 0.21100872
Iteration 8, loss = 0.21716820
Iteration 9, loss = 0.20312090
Iteration 10, loss = 0.19781420
Iteration 11, loss = 0.19716734
Iteration 12, loss = 0.20028679
Iteration 13, loss = 0.21285591
Iteration 14, loss = 0.19388036
Iteration 15, loss = 0.18974327
Iteration 16, loss = 0.18606529
Iteration 17, loss = 0.17866861
Iteration 18, loss = 0.17564841
Iteration 19, loss = 0.18196068
Iteration 20, loss = 0.18147055
Iteration 21, loss = 0.18433985
Iteration 22, loss = 0.18143019
Iteration 23, loss = 0.18328072
Iteration 24, loss = 0.19507200
Iteration 25, loss = 0.18681708
Iteration 26, loss = 0.18676946
Iteration 27, loss = 0.18051239
Iteration 28, loss = 0.18289541
Iteration 29, loss = 0.18304957
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.49822661
Iteration 2, loss = 0.31350040
Iteration 3, loss = 0.26722329
Iteration 4, loss = 0.24270450
Iteration 5, loss = 0.23121922
Iteration 6, loss = 0.21604261
Iteration 7, loss = 0.20960931
Iteration 8, loss = 0.21220958
Iteration 9, loss = 0.20570038
Iteration 10, loss = 0.19683144
Iteration 11, loss = 0.19988785
Iteration 12, loss = 0.19493616
Iteration 13, loss = 0.18426520
Iteration 14, loss = 0.17328732
Iteration 15, loss = 0.17104245
Iteration 16, loss = 0.18295072
Iteration 17, loss = 0.17242529
Iteration 18, loss = 0.16362813
Iteration 19, loss = 0.16323397
Iteration 20, loss = 0.15948756
Iteration 21, loss = 0.16021681
Iteration 22, loss = 0.16367213
Iteration 23, loss = 0.15767298
Iteration 24, loss = 0.15278853
Iteration 25, loss = 0.15514336
Iteration 26, loss = 0.15454646
Iteration 27, loss = 0.15222551
Iteration 28, loss = 0.15239849
Iteration 29, loss = 0.15082333
Iteration 30, loss = 0.14942940
Iteration 31, loss = 0.14919054
Iteration 32, loss = 0.14872019
Iteration 33, loss = 0.14734910
Iteration 34, loss = 0.14861812
Iteration 35, loss = 0.14678142
Iteration 36, loss = 0.16374141
Iteration 37, loss = 0.16211498
Iteration 38, loss = 0.15880154
Iteration 39, loss = 0.15802705
Iteration 40, loss = 0.15710800
Iteration 41, loss = 0.15650920
Iteration 42, loss = 0.15531805
Iteration 43, loss = 0.15445625
Iteration 44, loss = 0.15517252
Iteration 45, loss = 0.16032534
Iteration 46, loss = 0.16447115
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.53703394
Iteration 2, loss = 0.32489490
Iteration 3, loss = 0.25625969
Iteration 4, loss = 0.24279359
Iteration 5, loss = 0.23310222
Iteration 6, loss = 0.22441231
Iteration 7, loss = 0.21018430
Iteration 8, loss = 0.18201397
Iteration 9, loss = 0.18010179
Iteration 10, loss = 0.16899958
Iteration 11, loss = 0.17960460
Iteration 12, loss = 0.19703130
Iteration 13, loss = 0.20803125
Iteration 14, loss = 0.20008058
Iteration 15, loss = 0.19572853
Iteration 16, loss = 0.19406770
Iteration 17, loss = 0.19838710
Iteration 18, loss = 0.19636216
Iteration 19, loss = 0.19027492
Iteration 20, loss = 0.18630546
Iteration 21, loss = 0.20188985
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.56681387
Iteration 2, loss = 0.35697402
Iteration 3, loss = 0.27893847
Iteration 4, loss = 0.24461271
Iteration 5, loss = 0.20151139
Iteration 6, loss = 0.19790674
Iteration 7, loss = 0.19121366
Iteration 8, loss = 0.17766465
Iteration 9, loss = 0.20255017
Iteration 10, loss = 0.20712776
Iteration 11, loss = 0.19748510
Iteration 12, loss = 0.19855620
Iteration 13, loss = 0.19479595
Iteration 14, loss = 0.17564295
Iteration 15, loss = 0.18819947
Iteration 16, loss = 0.18567902
Iteration 17, loss = 0.17753006
Iteration 18, loss = 0.17523220
Iteration 19, loss = 0.18060193
Iteration 20, loss = 0.17967078
Iteration 21, loss = 0.17866868
Iteration 22, loss = 0.17914301
Iteration 23, loss = 0.17762038
Iteration 24, loss = 0.17596915
Iteration 25, loss = 0.17433735
Iteration 26, loss = 0.17347136
Iteration 27, loss = 0.18484857
Iteration 28, loss = 0.17995415
Iteration 29, loss = 0.17725632
Iteration 30, loss = 0.17548632
Iteration 31, loss = 0.17526431
Iteration 32, loss = 0.17753750
Iteration 33, loss = 0.17617493
Iteration 34, loss = 0.17530390
Iteration 35, loss = 0.17437148
Iteration 36, loss = 0.17392207
Iteration 37, loss = 0.17336863
Iteration 38, loss = 0.17183468
Iteration 39, loss = 0.17135488
Iteration 40, loss = 0.17091121
Iteration 41, loss = 0.17015592
Iteration 42, loss = 0.16963997
Iteration 43, loss = 0.16871241
Iteration 44, loss = 0.16832603
Iteration 45, loss = 0.16376330
Iteration 46, loss = 0.15820289
Iteration 47, loss = 0.16087455
Iteration 48, loss = 0.16666510
Iteration 49, loss = 0.16354293
Iteration 50, loss = 0.16442822
Iteration 51, loss = 0.16462548
Iteration 52, loss = 0.16472348
Iteration 53, loss = 0.16502937
Iteration 54, loss = 0.16392091
Iteration 55, loss = 0.15181585
Iteration 56, loss = 0.14919037
Iteration 57, loss = 0.16492142
Iteration 58, loss = 0.16253560
Iteration 59, loss = 0.16401783
Iteration 60, loss = 0.16496381
Iteration 61, loss = 0.16401959
Iteration 62, loss = 0.16366431
Iteration 63, loss = 0.16286967
Iteration 64, loss = 0.15660030
Iteration 65, loss = 0.15839404
Iteration 66, loss = 0.15713411
Iteration 67, loss = 0.15366886
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 15.04111359
Iteration 2, loss = 5.91150403
Iteration 3, loss = 4.12478348
Iteration 4, loss = 3.98896796
Iteration 5, loss = 3.36147129
Iteration 6, loss = 3.62621349
Iteration 7, loss = 4.04863510
Iteration 8, loss = 3.51787570
Iteration 9, loss = 3.42968919
Iteration 10, loss = 3.39241624
Iteration 11, loss = 3.80243494
Iteration 12, loss = 3.34579391
Iteration 13, loss = 2.91082387
Iteration 14, loss = 3.09366977
Iteration 15, loss = 2.82330509
Iteration 16, loss = 2.81135456
Iteration 17, loss = 3.04204032
Iteration 18, loss = 3.07518609
Iteration 19, loss = 3.57856438
Iteration 20, loss = 3.45602398
Iteration 21, loss = 3.70232220
Iteration 22, loss = 2.77042132
Iteration 23, loss = 2.78381672
Iteration 24, loss = 3.05675550
Iteration 25, loss = 2.55824670
Iteration 26, loss = 2.60060839
Iteration 27, loss = 4.13392859
Iteration 28, loss = 3.29418022
Iteration 29, loss = 2.66467011
Iteration 30, loss = 2.87119748
Iteration 31, loss = 3.05077797
Iteration 32, loss = 2.58749505
Iteration 33, loss = 2.97996673
Iteration 34, loss = 3.52774716
Iteration 35, loss = 2.63715001
Iteration 36, loss = 2.75545350
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 10.91692788
Iteration 2, loss = 3.57817215
Iteration 3, loss = 3.16347976
Iteration 4, loss = 3.42087860
Iteration 5, loss = 3.79290897
Iteration 6, loss = 3.51333500
Iteration 7, loss = 2.77072099
Iteration 8, loss = 3.76879714
Iteration 9, loss = 2.84849934
Iteration 10, loss = 2.95953638
Iteration 11, loss = 2.78481967
Iteration 12, loss = 2.63159483
Iteration 13, loss = 3.58918860
Iteration 14, loss = 3.34683273
Iteration 15, loss = 3.39554000
Iteration 16, loss = 4.00385261
Iteration 17, loss = 2.91567425
Iteration 18, loss = 2.97073790
Iteration 19, loss = 2.84404812
Iteration 20, loss = 2.07415255
Iteration 21, loss = 2.26423883
Iteration 22, loss = 6.11248741
Iteration 23, loss = 3.60571477
Iteration 24, loss = 2.45015522
Iteration 25, loss = 2.87565480
Iteration 26, loss = 2.80874279
Iteration 27, loss = 3.28784603
Iteration 28, loss = 3.31753396
Iteration 29, loss = 2.53536481
Iteration 30, loss = 3.33734029
Iteration 31, loss = 3.16913914
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 10.73785045
Iteration 2, loss = 6.33025801
Iteration 3, loss = 4.14946797
Iteration 4, loss = 3.02855050
Iteration 5, loss = 3.45270672
Iteration 6, loss = 4.22839587
Iteration 7, loss = 2.94754856
Iteration 8, loss = 3.30597248
Iteration 9, loss = 2.72956324
Iteration 10, loss = 3.50397334
Iteration 11, loss = 3.06155000
Iteration 12, loss = 3.11046749
Iteration 13, loss = 3.40530317
Iteration 14, loss = 3.49060546
Iteration 15, loss = 3.00986643
Iteration 16, loss = 2.43574597
Iteration 17, loss = 3.39840134
Iteration 18, loss = 4.40506578
Iteration 19, loss = 2.62429032
Iteration 20, loss = 2.92993647
Iteration 21, loss = 3.11833380
Iteration 22, loss = 2.37401731
Iteration 23, loss = 2.77955782
Iteration 24, loss = 3.08012712
Iteration 25, loss = 2.54042874
Iteration 26, loss = 2.33521499
Iteration 27, loss = 2.66572114
Iteration 28, loss = 3.52759597
Iteration 29, loss = 2.60898457
Iteration 30, loss = 3.56735005
Iteration 31, loss = 3.18546740
Iteration 32, loss = 2.67788289
Iteration 33, loss = 2.09632895
Iteration 34, loss = 2.82440464
Iteration 35, loss = 2.95037922
Iteration 36, loss = 3.34439984
Iteration 37, loss = 3.14740561
Iteration 38, loss = 2.92265291
Iteration 39, loss = 2.81991156
Iteration 40, loss = 2.20217186
Iteration 41, loss = 2.03392191
Iteration 42, loss = 1.89649073
Iteration 43, loss = 2.66353970
Iteration 44, loss = 2.02643629
Iteration 45, loss = 2.21553363
Iteration 46, loss = 3.23589981
Iteration 47, loss = 2.48072328
Iteration 48, loss = 2.12652531
Iteration 49, loss = 1.81475360
Iteration 50, loss = 3.63620482
Iteration 51, loss = 3.17871016
Iteration 52, loss = 2.34061824
Iteration 53, loss = 4.11839712
Iteration 54, loss = 2.83086683
Iteration 55, loss = 2.74204068
Iteration 56, loss = 2.21540239
Iteration 57, loss = 2.76619453
Iteration 58, loss = 2.40582258
Iteration 59, loss = 2.12191784
Iteration 60, loss = 3.33674852
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 13.42175008
Iteration 2, loss = 7.28304071
Iteration 3, loss = 6.12447460
Iteration 4, loss = 6.44919963
Iteration 5, loss = 3.52319119
Iteration 6, loss = 4.07802819
Iteration 7, loss = 3.82320673
Iteration 8, loss = 3.24350869
Iteration 9, loss = 2.87114229
Iteration 10, loss = 2.55731460
Iteration 11, loss = 2.98867783
Iteration 12, loss = 5.50427439
Iteration 13, loss = 4.13207403
Iteration 14, loss = 2.31379429
Iteration 15, loss = 3.38221392
Iteration 16, loss = 2.71993972
Iteration 17, loss = 2.53310986
Iteration 18, loss = 2.72963404
Iteration 19, loss = 2.95792091
Iteration 20, loss = 3.08411229
Iteration 21, loss = 2.48613119
Iteration 22, loss = 3.16826366
Iteration 23, loss = 3.11554916
Iteration 24, loss = 3.36657876
Iteration 25, loss = 2.53208781
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 11.35297880
Iteration 2, loss = 7.62712361
Iteration 3, loss = 5.51562134
Iteration 4, loss = 5.46937941
Iteration 5, loss = 3.99700387
Iteration 6, loss = 4.05633225
Iteration 7, loss = 3.84174499
Iteration 8, loss = 3.94102623
Iteration 9, loss = 3.50889067
Iteration 10, loss = 3.22443791
Iteration 11, loss = 3.67950403
Iteration 12, loss = 2.80183500
Iteration 13, loss = 2.45141481
Iteration 14, loss = 2.90375588
Iteration 15, loss = 3.10892501
Iteration 16, loss = 3.74048639
Iteration 17, loss = 3.43805184
Iteration 18, loss = 2.63134626
Iteration 19, loss = 4.84554215
Iteration 20, loss = 3.69015848
Iteration 21, loss = 3.18594563
Iteration 22, loss = 2.67488899
Iteration 23, loss = 2.77143453
Iteration 24, loss = 3.25616240
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.56409446
Iteration 2, loss = 0.43242678
Iteration 3, loss = 0.36970000
Iteration 4, loss = 0.31784045
Iteration 5, loss = 0.29527168
Iteration 6, loss = 0.28108184
Iteration 7, loss = 0.26976555
Iteration 8, loss = 0.25924556
Iteration 9, loss = 0.25402671
Iteration 10, loss = 0.24100213
Iteration 11, loss = 0.27512614
Iteration 12, loss = 0.26118209
Iteration 13, loss = 0.26554394
Iteration 14, loss = 0.26978382
Iteration 15, loss = 0.27663439
Iteration 16, loss = 0.26605874
Iteration 17, loss = 0.25137772
Iteration 18, loss = 0.25794550
Iteration 19, loss = 0.25009149
Iteration 20, loss = 0.25485792
Iteration 21, loss = 0.25063032
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.52348241
Iteration 2, loss = 0.39964105
Iteration 3, loss = 0.33130109
Iteration 4, loss = 0.29299774
Iteration 5, loss = 0.28763457
Iteration 6, loss = 0.28090596
Iteration 7, loss = 0.27178704
Iteration 8, loss = 0.25983482
Iteration 9, loss = 0.25362106
Iteration 10, loss = 0.25468138
Iteration 11, loss = 0.23908300
Iteration 12, loss = 0.23227605
Iteration 13, loss = 0.23435813
Iteration 14, loss = 0.21543560
Iteration 15, loss = 0.23196791
Iteration 16, loss = 0.23502845
Iteration 17, loss = 0.23129996
Iteration 18, loss = 0.24101552
Iteration 19, loss = 0.23703740
Iteration 20, loss = 0.22886143
Iteration 21, loss = 0.23985686
Iteration 22, loss = 0.23098687
Iteration 23, loss = 0.22554121
Iteration 24, loss = 0.22663142
Iteration 25, loss = 0.22549519
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.65802689
Iteration 2, loss = 0.47373759
Iteration 3, loss = 0.37789185
Iteration 4, loss = 0.31239914
Iteration 5, loss = 0.27874637
Iteration 6, loss = 0.26338094
Iteration 7, loss = 0.23550084
Iteration 8, loss = 0.21330744
Iteration 9, loss = 0.20496748
Iteration 10, loss = 0.21958665
Iteration 11, loss = 0.21050658
Iteration 12, loss = 0.21551959
Iteration 13, loss = 0.22848057
Iteration 14, loss = 0.23745475
Iteration 15, loss = 0.26057352
Iteration 16, loss = 0.24514293
Iteration 17, loss = 0.24756004
Iteration 18, loss = 0.23742250
Iteration 19, loss = 0.23193454
Iteration 20, loss = 0.24205675
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.59871184
Iteration 2, loss = 0.44491481
Iteration 3, loss = 0.37545055
Iteration 4, loss = 0.33583183
Iteration 5, loss = 0.29571336
Iteration 6, loss = 0.27842424
Iteration 7, loss = 0.27449448
Iteration 8, loss = 0.26425456
Iteration 9, loss = 0.25694000
Iteration 10, loss = 0.25743687
Iteration 11, loss = 0.26897473
Iteration 12, loss = 0.22808056
Iteration 13, loss = 0.21341251
Iteration 14, loss = 0.20875197
Iteration 15, loss = 0.20701288
Iteration 16, loss = 0.21764293
Iteration 17, loss = 0.21373438
Iteration 18, loss = 0.21720605
Iteration 19, loss = 0.21723714
Iteration 20, loss = 0.21566724
Iteration 21, loss = 0.19257324
Iteration 22, loss = 0.19227749
Iteration 23, loss = 0.19408067
Iteration 24, loss = 0.20290455
Iteration 25, loss = 0.20212856
Iteration 26, loss = 0.21083719
Iteration 27, loss = 0.21377779
Iteration 28, loss = 0.20699716
Iteration 29, loss = 0.20270488
Iteration 30, loss = 0.19921876
Iteration 31, loss = 0.19944305
Iteration 32, loss = 0.19910025
Iteration 33, loss = 0.18824457
Iteration 34, loss = 0.18104503
Iteration 35, loss = 0.19025520
Iteration 36, loss = 0.20686150
Iteration 37, loss = 0.20405240
Iteration 38, loss = 0.20384225
Iteration 39, loss = 0.20078693
Iteration 40, loss = 0.20023488
Iteration 41, loss = 0.20842273
Iteration 42, loss = 0.20769250
Iteration 43, loss = 0.20713514
Iteration 44, loss = 0.20837692
Iteration 45, loss = 0.20770585
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.61942104
Iteration 2, loss = 0.48325966
Iteration 3, loss = 0.41710570
Iteration 4, loss = 0.35933154
Iteration 5, loss = 0.31622285
Iteration 6, loss = 0.32030245
Iteration 7, loss = 0.29278667
Iteration 8, loss = 0.27316440
Iteration 9, loss = 0.25849385
Iteration 10, loss = 0.26673041
Iteration 11, loss = 0.25867380
Iteration 12, loss = 0.24585023
Iteration 13, loss = 0.23959143
Iteration 14, loss = 0.22803305
Iteration 15, loss = 0.23423076
Iteration 16, loss = 0.23673001
Iteration 17, loss = 0.23092352
Iteration 18, loss = 0.22115687
Iteration 19, loss = 0.21124571
Iteration 20, loss = 0.21210407
Iteration 21, loss = 0.21364707
Iteration 22, loss = 0.22951476
Iteration 23, loss = 0.23381399
Iteration 24, loss = 0.22937663
Iteration 25, loss = 0.22389464
Iteration 26, loss = 0.22844427
Iteration 27, loss = 0.21935404
Iteration 28, loss = 0.21390468
Iteration 29, loss = 0.20989300
Iteration 30, loss = 0.21159898
Iteration 31, loss = 0.19563057
Iteration 32, loss = 0.19876530
Iteration 33, loss = 0.19739602
Iteration 34, loss = 0.19212168
Iteration 35, loss = 0.19343221
Iteration 36, loss = 0.20796264
Iteration 37, loss = 0.21999946
Iteration 38, loss = 0.21029120
Iteration 39, loss = 0.21052462
Iteration 40, loss = 0.20979183
Iteration 41, loss = 0.21030971
Iteration 42, loss = 0.21152747
Iteration 43, loss = 0.20781973
Iteration 44, loss = 0.20512341
Iteration 45, loss = 0.20070113
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.56088257
Iteration 2, loss = 0.34795533
Iteration 3, loss = 0.27616941
Iteration 4, loss = 0.24900818
Iteration 5, loss = 0.23596391
[Verbose MLPClassifier training output, condensed. Roughly twenty training runs each printed per-epoch lines of the form "Iteration N, loss = ...", and every run terminated with the early-stopping message:

Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.

In the earlier runs, the training loss fell from roughly 0.3–0.8 at the first epoch to roughly 0.11–0.19 before stopping (typically within 20–90 epochs). One run trained for ~160 epochs, reaching a loss near 0.36 before diverging and stopping. Several later runs started at much higher losses (roughly 8–27) and plateaued around 2–6 before early stopping.]
Iteration 14, loss = 3.66869972
Iteration 15, loss = 3.49577366
Iteration 16, loss = 3.59285433
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 20.62196486
Iteration 2, loss = 6.57501195
Iteration 3, loss = 2.63357391
Iteration 4, loss = 2.27209396
Iteration 5, loss = 2.72037095
Iteration 6, loss = 3.08733679
Iteration 7, loss = 3.02754919
Iteration 8, loss = 3.12837205
Iteration 9, loss = 2.98705814
Iteration 10, loss = 2.92992469
Iteration 11, loss = 2.78931378
Iteration 12, loss = 2.43608207
Iteration 13, loss = 2.67084765
Iteration 14, loss = 2.63769308
Iteration 15, loss = 2.57932603
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.88065916
Iteration 2, loss = 0.82103937
Iteration 3, loss = 0.79923524
Iteration 4, loss = 0.79167508
Iteration 5, loss = 0.80048344
Iteration 6, loss = 0.75385093
Iteration 7, loss = 0.71286139
Iteration 8, loss = 0.69340122
Iteration 9, loss = 0.68094259
Iteration 10, loss = 0.66983446
Iteration 11, loss = 0.65717416
Iteration 12, loss = 0.58612688
Iteration 13, loss = 0.59577243
Iteration 14, loss = 0.60378634
Iteration 15, loss = 0.63319454
Iteration 16, loss = 0.60488741
Iteration 17, loss = 0.59006935
Iteration 18, loss = 0.58556263
Iteration 19, loss = 0.57366827
Iteration 20, loss = 0.55874406
Iteration 21, loss = 0.57193843
Iteration 22, loss = 0.57234446
Iteration 23, loss = 0.56465855
Iteration 24, loss = 0.55545070
Iteration 25, loss = 0.54740012
Iteration 26, loss = 0.53986940
Iteration 27, loss = 0.53280827
Iteration 28, loss = 0.52013609
Iteration 29, loss = 0.51030722
Iteration 30, loss = 0.52923377
Iteration 31, loss = 0.51098354
Iteration 32, loss = 0.49055268
Iteration 33, loss = 0.47783289
Iteration 34, loss = 0.46957544
Iteration 35, loss = 0.46292958
Iteration 36, loss = 0.45614184
Iteration 37, loss = 0.44985900
Iteration 38, loss = 0.44354676
Iteration 39, loss = 0.43367140
Iteration 40, loss = 0.48299082
Iteration 41, loss = 0.47453677
Iteration 42, loss = 0.46938727
Iteration 43, loss = 0.46487906
Iteration 44, loss = 0.46068744
Iteration 45, loss = 0.45686564
Iteration 46, loss = 0.45328122
Iteration 47, loss = 0.44991026
Iteration 48, loss = 0.44676669
Iteration 49, loss = 0.44383888
Iteration 50, loss = 0.44330971
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.81790034
Iteration 2, loss = 0.66670357
Iteration 3, loss = 0.58899721
Iteration 4, loss = 0.53297482
Iteration 5, loss = 0.49846378
Iteration 6, loss = 0.47072456
Iteration 7, loss = 0.43589133
Iteration 8, loss = 0.41847833
Iteration 9, loss = 0.41818380
Iteration 10, loss = 0.44048745
Iteration 11, loss = 0.47486951
Iteration 12, loss = 0.44599773
Iteration 13, loss = 0.42991510
Iteration 14, loss = 0.41506694
Iteration 15, loss = 0.41172851
Iteration 16, loss = 0.39690805
Iteration 17, loss = 0.38784681
Iteration 18, loss = 0.37856384
Iteration 19, loss = 0.37451947
Iteration 20, loss = 0.37066457
Iteration 21, loss = 0.36685455
Iteration 22, loss = 0.35806035
Iteration 23, loss = 0.35757157
Iteration 24, loss = 0.35513690
Iteration 25, loss = 0.34892460
Iteration 26, loss = 0.33887106
Iteration 27, loss = 0.31378001
Iteration 28, loss = 0.31374886
Iteration 29, loss = 0.32510534
Iteration 30, loss = 0.31506961
Iteration 31, loss = 0.30831555
Iteration 32, loss = 0.29755731
Iteration 33, loss = 0.29142965
Iteration 34, loss = 0.28845566
Iteration 35, loss = 0.28537935
Iteration 36, loss = 0.28464898
Iteration 37, loss = 0.28705167
Iteration 38, loss = 0.28018066
Iteration 39, loss = 0.27671486
Iteration 40, loss = 0.27437364
Iteration 41, loss = 0.27252091
Iteration 42, loss = 0.27076791
Iteration 43, loss = 0.26913007
Iteration 44, loss = 0.26764457
Iteration 45, loss = 0.26528326
Iteration 46, loss = 0.26233963
Iteration 47, loss = 0.26108166
Iteration 48, loss = 0.26009051
Iteration 49, loss = 0.25907151
Iteration 50, loss = 0.25805296
Iteration 51, loss = 0.25670876
Iteration 52, loss = 0.25546228
Iteration 53, loss = 0.25462841
Iteration 54, loss = 0.25388177
Iteration 55, loss = 0.25306142
Iteration 56, loss = 0.25217167
Iteration 57, loss = 0.25080836
Iteration 58, loss = 0.25011872
Iteration 59, loss = 0.24948128
Iteration 60, loss = 0.24882641
Iteration 61, loss = 0.24807537
Iteration 62, loss = 0.24749917
Iteration 63, loss = 0.24694399
Iteration 64, loss = 0.24635093
Iteration 65, loss = 0.26192148
Iteration 66, loss = 0.29772065
Iteration 67, loss = 0.28865050
Iteration 68, loss = 0.28529732
Iteration 69, loss = 0.28358953
Iteration 70, loss = 0.28236769
Iteration 71, loss = 0.28147160
Iteration 72, loss = 0.28087977
Iteration 73, loss = 0.28611129
Iteration 74, loss = 0.28172588
Iteration 75, loss = 0.27937147
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.96538827
Iteration 2, loss = 0.88473706
Iteration 3, loss = 0.82364986
Iteration 4, loss = 0.74795078
Iteration 5, loss = 0.68530287
Iteration 6, loss = 0.63479801
Iteration 7, loss = 0.60744229
Iteration 8, loss = 0.57689587
Iteration 9, loss = 0.55165673
Iteration 10, loss = 0.53060887
Iteration 11, loss = 0.50897019
Iteration 12, loss = 0.48419143
Iteration 13, loss = 0.46622709
Iteration 14, loss = 0.61285104
Iteration 15, loss = 0.56661505
Iteration 16, loss = 0.53126473
Iteration 17, loss = 0.50472517
Iteration 18, loss = 0.49341012
Iteration 19, loss = 0.49407504
Iteration 20, loss = 0.45512624
Iteration 21, loss = 0.42246252
Iteration 22, loss = 0.39793609
Iteration 23, loss = 0.37646727
Iteration 24, loss = 0.36436313
Iteration 25, loss = 0.35586950
Iteration 26, loss = 0.35080234
Iteration 27, loss = 0.34272133
Iteration 28, loss = 0.33688076
Iteration 29, loss = 0.33132580
Iteration 30, loss = 0.30641798
Iteration 31, loss = 0.31528973
Iteration 32, loss = 0.30320352
Iteration 33, loss = 0.29195756
Iteration 34, loss = 0.28695676
Iteration 35, loss = 0.28113798
Iteration 36, loss = 0.34878091
Iteration 37, loss = 0.42707216
Iteration 38, loss = 0.35706068
Iteration 39, loss = 0.33016539
Iteration 40, loss = 0.31829616
Iteration 41, loss = 0.31001464
Iteration 42, loss = 0.29911798
Iteration 43, loss = 0.29145288
Iteration 44, loss = 0.28517698
Iteration 45, loss = 0.28139977
Iteration 46, loss = 0.27807037
Iteration 47, loss = 0.27384285
Iteration 48, loss = 0.26723114
Iteration 49, loss = 0.26444851
Iteration 50, loss = 0.26228755
Iteration 51, loss = 0.25965931
Iteration 52, loss = 0.25395059
Iteration 53, loss = 0.25158271
Iteration 54, loss = 0.26167938
Iteration 55, loss = 0.25927057
Iteration 56, loss = 0.25720090
Iteration 57, loss = 0.25372871
Iteration 58, loss = 0.24414560
Iteration 59, loss = 0.23756356
Iteration 60, loss = 0.23645982
Iteration 61, loss = 0.23548132
Iteration 62, loss = 0.23458588
Iteration 63, loss = 0.23372370
Iteration 64, loss = 0.23293894
Iteration 65, loss = 0.23216006
Iteration 66, loss = 0.42052580
Iteration 67, loss = 0.37374959
Iteration 68, loss = 0.35533571
Iteration 69, loss = 0.35044769
Iteration 70, loss = 0.34732939
Iteration 71, loss = 0.34528455
Iteration 72, loss = 0.34358406
Iteration 73, loss = 0.34251739
Iteration 74, loss = 0.34395698
Iteration 75, loss = 0.34255435
Iteration 76, loss = 0.33403103
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.85122884
Iteration 2, loss = 0.80131471
Iteration 3, loss = 0.77537661
Iteration 4, loss = 0.73839461
Iteration 5, loss = 0.65942329
Iteration 6, loss = 0.61100039
Iteration 7, loss = 0.59452775
Iteration 8, loss = 0.57144588
Iteration 9, loss = 0.55182986
Iteration 10, loss = 0.53726284
Iteration 11, loss = 0.53533346
Iteration 12, loss = 0.52592452
Iteration 13, loss = 0.51728321
Iteration 14, loss = 0.50862403
Iteration 15, loss = 0.48966801
Iteration 16, loss = 0.45424015
Iteration 17, loss = 0.43087740
Iteration 18, loss = 0.41118353
Iteration 19, loss = 0.39398781
Iteration 20, loss = 0.38426215
Iteration 21, loss = 0.37584156
Iteration 22, loss = 0.37498837
Iteration 23, loss = 0.37052393
Iteration 24, loss = 0.36443384
Iteration 25, loss = 0.35846124
Iteration 26, loss = 0.36220302
Iteration 27, loss = 0.37837513
Iteration 28, loss = 0.35288265
Iteration 29, loss = 0.34679766
Iteration 30, loss = 0.34290321
Iteration 31, loss = 0.32938119
Iteration 32, loss = 0.32467708
Iteration 33, loss = 0.32543841
Iteration 34, loss = 0.32304425
Iteration 35, loss = 0.31675508
Iteration 36, loss = 0.31291207
Iteration 37, loss = 0.31015654
Iteration 38, loss = 0.30777678
Iteration 39, loss = 0.30558471
Iteration 40, loss = 0.30350630
Iteration 41, loss = 0.30159002
Iteration 42, loss = 0.29976085
Iteration 43, loss = 0.29805955
Iteration 44, loss = 0.29637380
Iteration 45, loss = 0.29479503
Iteration 46, loss = 0.29332005
Iteration 47, loss = 0.29193285
Iteration 48, loss = 0.29056836
Iteration 49, loss = 0.28930417
Iteration 50, loss = 0.28811499
Iteration 51, loss = 0.28698304
Iteration 52, loss = 0.28536942
Iteration 53, loss = 0.28441104
Iteration 54, loss = 0.28334412
Iteration 55, loss = 0.28258949
Iteration 56, loss = 0.28169989
Iteration 57, loss = 0.28076260
Iteration 58, loss = 0.27992642
Iteration 59, loss = 0.27322611
Iteration 60, loss = 0.26653126
Iteration 61, loss = 0.26558515
Iteration 62, loss = 0.26445588
Iteration 63, loss = 0.26347572
Iteration 64, loss = 0.26237202
Iteration 65, loss = 0.26138058
Iteration 66, loss = 0.26037751
Iteration 67, loss = 0.25913415
Iteration 68, loss = 0.25827895
Iteration 69, loss = 0.25747507
Iteration 70, loss = 0.25669694
Iteration 71, loss = 0.25596314
Iteration 72, loss = 0.25532539
Iteration 73, loss = 0.25460374
Iteration 74, loss = 0.25393398
Iteration 75, loss = 0.25332052
Iteration 76, loss = 0.25275325
Iteration 77, loss = 0.25226183
Iteration 78, loss = 0.25160435
Iteration 79, loss = 0.25106892
Iteration 80, loss = 0.25058915
Iteration 81, loss = 0.25016410
Iteration 82, loss = 0.24966747
Iteration 83, loss = 0.24925002
Iteration 84, loss = 0.24899527
Iteration 85, loss = 0.24859624
Iteration 86, loss = 0.24821212
Iteration 87, loss = 0.27432775
Iteration 88, loss = 0.48107445
Iteration 89, loss = 0.44765176
Iteration 90, loss = 0.48239014
Iteration 91, loss = 0.49489907
Iteration 92, loss = 0.47942970
Iteration 93, loss = 0.47251154
Iteration 94, loss = 0.46773595
Iteration 95, loss = 0.46383749
Iteration 96, loss = 0.46013218
Iteration 97, loss = 0.83374117
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 1.06998469
Iteration 2, loss = 0.95181971
Iteration 3, loss = 0.85336804
Iteration 4, loss = 0.78854783
Iteration 5, loss = 0.72654040
Iteration 6, loss = 0.68423393
Iteration 7, loss = 0.65686298
Iteration 8, loss = 0.63085561
Iteration 9, loss = 0.60637225
Iteration 10, loss = 0.58484514
Iteration 11, loss = 0.56665152
Iteration 12, loss = 0.54862624
Iteration 13, loss = 0.53036826
Iteration 14, loss = 0.51483007
Iteration 15, loss = 0.49835467
Iteration 16, loss = 0.48718376
Iteration 17, loss = 0.47641378
Iteration 18, loss = 0.46761932
Iteration 19, loss = 0.46033120
Iteration 20, loss = 0.45390184
Iteration 21, loss = 0.44766837
Iteration 22, loss = 0.44129870
Iteration 23, loss = 0.45233097
Iteration 24, loss = 0.43052681
Iteration 25, loss = 0.41341041
Iteration 26, loss = 0.40529980
Iteration 27, loss = 0.39939411
Iteration 28, loss = 0.39338560
Iteration 29, loss = 0.38812381
Iteration 30, loss = 0.38330770
Iteration 31, loss = 0.37750832
Iteration 32, loss = 0.37208181
Iteration 33, loss = 0.36075412
Iteration 34, loss = 0.35485780
Iteration 35, loss = 0.34233210
Iteration 36, loss = 0.33821902
Iteration 37, loss = 0.33468471
Iteration 38, loss = 0.32961614
Iteration 39, loss = 0.32520513
Iteration 40, loss = 0.32217514
Iteration 41, loss = 0.31936434
Iteration 42, loss = 0.31673369
Iteration 43, loss = 0.32365199
Iteration 44, loss = 0.34295596
Iteration 45, loss = 0.33919805
Iteration 46, loss = 0.33631733
Iteration 47, loss = 0.33418599
Iteration 48, loss = 0.33493223
Iteration 49, loss = 0.33313121
Iteration 50, loss = 0.33116314
Iteration 51, loss = 0.32241524
Iteration 52, loss = 0.31707087
Iteration 53, loss = 0.31354885
Iteration 54, loss = 0.33460584
Iteration 55, loss = 0.30934392
Iteration 56, loss = 0.30712881
Iteration 57, loss = 0.30574215
Iteration 58, loss = 0.30272155
Iteration 59, loss = 0.30094315
Iteration 60, loss = 0.29819691
Iteration 61, loss = 0.28461014
Iteration 62, loss = 0.28087423
Iteration 63, loss = 0.27828696
Iteration 64, loss = 0.27591852
Iteration 65, loss = 0.27380089
Iteration 66, loss = 0.27172849
Iteration 67, loss = 0.26978480
Iteration 68, loss = 0.26766115
Iteration 69, loss = 0.26582326
Iteration 70, loss = 0.26421833
Iteration 71, loss = 0.26263085
Iteration 72, loss = 0.26119611
Iteration 73, loss = 0.25973378
Iteration 74, loss = 0.25830445
Iteration 75, loss = 0.25706355
Iteration 76, loss = 0.25586579
Iteration 77, loss = 0.25475448
Iteration 78, loss = 0.25365406
Iteration 79, loss = 0.25266106
Iteration 80, loss = 0.25167231
Iteration 81, loss = 0.25076227
Iteration 82, loss = 0.24987583
Iteration 83, loss = 0.24897314
Iteration 84, loss = 0.24781305
Iteration 85, loss = 0.24694028
Iteration 86, loss = 0.24618729
Iteration 87, loss = 0.24549016
Iteration 88, loss = 0.24494220
Iteration 89, loss = 0.24427524
Iteration 90, loss = 0.24369149
Iteration 91, loss = 0.24312442
Iteration 92, loss = 0.24259897
Iteration 93, loss = 0.24203711
Iteration 94, loss = 0.24157783
Iteration 95, loss = 0.24109440
Iteration 96, loss = 0.24061904
Iteration 97, loss = 0.24015742
Iteration 98, loss = 0.23975684
Iteration 99, loss = 0.23922769
Iteration 100, loss = 0.23814601
Iteration 101, loss = 0.23787946
Iteration 102, loss = 0.23673743
Iteration 103, loss = 0.23520604
Iteration 104, loss = 0.34864101
Iteration 105, loss = 0.41901634
Iteration 106, loss = 0.35144357
Iteration 107, loss = 0.32912109
Iteration 108, loss = 0.31956894
Iteration 109, loss = 0.31595831
Iteration 110, loss = 0.31131952
Iteration 111, loss = 0.30604708
Iteration 112, loss = 0.30009920
Iteration 113, loss = 0.29630194
Iteration 114, loss = 0.29325131
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 13.46107077
Iteration 2, loss = 14.41825404
Iteration 3, loss = 12.27905929
Iteration 4, loss = 9.69797445
Iteration 5, loss = 6.39212943
Iteration 6, loss = 4.51147179
Iteration 7, loss = 3.31493735
Iteration 8, loss = 2.63411952
Iteration 9, loss = 3.46534097
Iteration 10, loss = 3.07872735
Iteration 11, loss = 3.09060326
Iteration 12, loss = 2.80709630
Iteration 13, loss = 2.59100003
Iteration 14, loss = 2.24596682
Iteration 15, loss = 3.61603846
Iteration 16, loss = 2.46487657
Iteration 17, loss = 2.64714605
Iteration 18, loss = 2.33950931
Iteration 19, loss = 2.69407704
Iteration 20, loss = 2.37446359
Iteration 21, loss = 2.16917872
Iteration 22, loss = 2.46122986
Iteration 23, loss = 2.43799394
Iteration 24, loss = 3.14318167
Iteration 25, loss = 2.33052466
Iteration 26, loss = 2.79772710
Iteration 27, loss = 3.39296691
Iteration 28, loss = 2.68972014
Iteration 29, loss = 2.27850812
Iteration 30, loss = 2.54916111
Iteration 31, loss = 2.54314206
Iteration 32, loss = 2.77397523
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 15.19590460
Iteration 2, loss = 9.17204284
Iteration 3, loss = 8.80508684
Iteration 4, loss = 7.74689145
Iteration 5, loss = 6.95002788
Iteration 6, loss = 5.22283655
Iteration 7, loss = 4.89354814
Iteration 8, loss = 4.63974277
Iteration 9, loss = 4.01397990
Iteration 10, loss = 3.70439738
Iteration 11, loss = 3.62117262
Iteration 12, loss = 3.46530666
Iteration 13, loss = 3.41541499
Iteration 14, loss = 3.83005292
Iteration 15, loss = 3.34752728
Iteration 16, loss = 3.43478750
Iteration 17, loss = 3.29076926
Iteration 18, loss = 2.87120757
Iteration 19, loss = 3.55702454
Iteration 20, loss = 3.64597497
Iteration 21, loss = 2.86223609
Iteration 22, loss = 3.06294990
Iteration 23, loss = 3.33215349
Iteration 24, loss = 3.54987118
Iteration 25, loss = 2.91072791
Iteration 26, loss = 3.05183735
Iteration 27, loss = 3.54201425
Iteration 28, loss = 3.04566785
Iteration 29, loss = 2.64838260
Iteration 30, loss = 3.72798491
Iteration 31, loss = 3.20131184
Iteration 32, loss = 3.00057332
Iteration 33, loss = 3.47050898
Iteration 34, loss = 3.09956998
Iteration 35, loss = 3.24912781
Iteration 36, loss = 3.76642634
Iteration 37, loss = 3.25996980
Iteration 38, loss = 3.18709100
Iteration 39, loss = 3.39278159
Iteration 40, loss = 3.05812275
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 12.30277972
Iteration 2, loss = 8.71719398
Iteration 3, loss = 5.69340852
Iteration 4, loss = 4.78753193
Iteration 5, loss = 4.22845770
Iteration 6, loss = 4.30400063
Iteration 7, loss = 3.99068736
Iteration 8, loss = 4.01152766
Iteration 9, loss = 3.95626191
Iteration 10, loss = 3.83152736
Iteration 11, loss = 3.67948306
Iteration 12, loss = 3.66794854
Iteration 13, loss = 3.54268288
Iteration 14, loss = 3.34171708
Iteration 15, loss = 3.53940485
Iteration 16, loss = 3.28332899
Iteration 17, loss = 3.50735311
Iteration 18, loss = 3.34567152
Iteration 19, loss = 3.41205989
Iteration 20, loss = 3.40737858
Iteration 21, loss = 3.08024812
Iteration 22, loss = 3.35905687
Iteration 23, loss = 2.93104803
Iteration 24, loss = 3.15814488
Iteration 25, loss = 2.96314591
Iteration 26, loss = 3.00830415
Iteration 27, loss = 3.06566863
Iteration 28, loss = 3.08195448
Iteration 29, loss = 2.97116109
Iteration 30, loss = 3.10393142
Iteration 31, loss = 2.72132634
Iteration 32, loss = 3.17936889
Iteration 33, loss = 3.07341046
Iteration 34, loss = 3.07256281
Iteration 35, loss = 2.88830918
Iteration 36, loss = 3.02026917
Iteration 37, loss = 2.62353218
Iteration 38, loss = 2.98539774
Iteration 39, loss = 2.88360072
Iteration 40, loss = 3.07456444
Iteration 41, loss = 2.66628409
Iteration 42, loss = 2.62343787
Iteration 43, loss = 2.78691047
Iteration 44, loss = 2.94289356
Iteration 45, loss = 2.59229871
Iteration 46, loss = 2.93837574
Iteration 47, loss = 2.63401457
Iteration 48, loss = 3.27079671
Iteration 49, loss = 2.58610926
Iteration 50, loss = 2.84922283
Iteration 51, loss = 2.56906392
Iteration 52, loss = 2.73644254
Iteration 53, loss = 2.43229312
Iteration 54, loss = 2.46684044
Iteration 55, loss = 2.64634028
Iteration 56, loss = 2.62710231
Iteration 57, loss = 2.38007142
Iteration 58, loss = 2.87957747
Iteration 59, loss = 2.44778804
Iteration 60, loss = 2.43110575
Iteration 61, loss = 2.30934848
Iteration 62, loss = 2.56469502
Iteration 63, loss = 2.23651078
Iteration 64, loss = 2.34117067
Iteration 65, loss = 2.15614489
Iteration 66, loss = 2.52242843
Iteration 67, loss = 2.22112613
Iteration 68, loss = 2.18365077
Iteration 69, loss = 2.05996747
Iteration 70, loss = 2.78138312
Iteration 71, loss = 2.65037799
Iteration 72, loss = 2.59041172
Iteration 73, loss = 2.76410387
Iteration 74, loss = 2.68662217
Iteration 75, loss = 2.51565052
Iteration 76, loss = 2.53344078
Iteration 77, loss = 2.76639159
Iteration 78, loss = 2.51879260
Iteration 79, loss = 2.03791696
Iteration 80, loss = 2.44700302
Iteration 81, loss = 2.13427087
Iteration 82, loss = 2.01647772
Iteration 83, loss = 2.17639682
Iteration 84, loss = 2.55557280
Iteration 85, loss = 2.30288417
Iteration 86, loss = 2.65777557
Iteration 87, loss = 2.69071571
Iteration 88, loss = 2.10379031
Iteration 89, loss = 2.16575751
Iteration 90, loss = 2.40487842
Iteration 91, loss = 2.55939725
Iteration 92, loss = 2.42553073
Iteration 93, loss = 2.31470446
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 20.10646887
Iteration 2, loss = 13.66792096
Iteration 3, loss = 14.11954466
Iteration 4, loss = 14.49430579
Iteration 5, loss = 13.95789172
Iteration 6, loss = 12.74027771
Iteration 7, loss = 12.30949714
Iteration 8, loss = 11.27223274
Iteration 9, loss = 10.79822042
Iteration 10, loss = 8.09362528
Iteration 11, loss = 7.21451335
Iteration 12, loss = 6.49950573
Iteration 13, loss = 6.80722559
Iteration 14, loss = 6.16805874
Iteration 15, loss = 6.39187021
Iteration 16, loss = 6.39107191
Iteration 17, loss = 6.34716860
Iteration 18, loss = 6.03770208
Iteration 19, loss = 6.40265309
Iteration 20, loss = 5.80117649
Iteration 21, loss = 6.54193648
Iteration 22, loss = 6.12701441
Iteration 23, loss = 5.75578821
Iteration 24, loss = 5.82793886
Iteration 25, loss = 6.23033110
Iteration 26, loss = 5.99278213
Iteration 27, loss = 6.15006946
Iteration 28, loss = 5.97190618
Iteration 29, loss = 6.00468546
Iteration 30, loss = 5.81787590
Iteration 31, loss = 6.03431810
Iteration 32, loss = 5.86745860
Iteration 33, loss = 6.57754796
Iteration 34, loss = 5.91434834
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 13.92064000
Iteration 2, loss = 12.22940933
Iteration 3, loss = 6.23790094
Iteration 4, loss = 2.62238487
Iteration 5, loss = 2.72830164
Iteration 6, loss = 2.51780919
Iteration 7, loss = 2.60900637
Iteration 8, loss = 2.30421145
Iteration 9, loss = 2.23729691
Iteration 10, loss = 2.42166190
Iteration 11, loss = 2.41293831
Iteration 12, loss = 2.73333111
Iteration 13, loss = 1.96767238
Iteration 14, loss = 1.92606334
Iteration 15, loss = 1.99700572
Iteration 16, loss = 2.49298335
Iteration 17, loss = 2.03079909
Iteration 18, loss = 2.30673359
Iteration 19, loss = 2.49153797
Iteration 20, loss = 2.36704462
Iteration 21, loss = 3.00790249
Iteration 22, loss = 2.39409269
Iteration 23, loss = 3.61343869
Iteration 24, loss = 2.55842499
Iteration 25, loss = 3.27887296
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.68219159
Iteration 2, loss = 0.62585689
Iteration 3, loss = 0.58821941
Iteration 4, loss = 0.55096288
Iteration 5, loss = 0.51663149
Iteration 6, loss = 0.47595868
Iteration 7, loss = 0.45689591
Iteration 8, loss = 0.44441603
Iteration 9, loss = 0.41672664
Iteration 10, loss = 0.39918608
Iteration 11, loss = 0.34899675
Iteration 12, loss = 0.33509851
Iteration 13, loss = 0.34306327
Iteration 14, loss = 0.33318018
Iteration 15, loss = 0.32695697
Iteration 16, loss = 0.34745001
Iteration 17, loss = 0.32015717
Iteration 18, loss = 0.30540039
Iteration 19, loss = 0.29834971
Iteration 20, loss = 0.29412681
Iteration 21, loss = 0.29051644
Iteration 22, loss = 0.28748636
Iteration 23, loss = 0.28488164
Iteration 24, loss = 0.28243192
Iteration 25, loss = 0.28014722
Iteration 26, loss = 0.27809378
Iteration 27, loss = 0.27624184
Iteration 28, loss = 0.27453987
Iteration 29, loss = 0.27231931
Iteration 30, loss = 0.27015628
Iteration 31, loss = 0.26881769
Iteration 32, loss = 0.26797287
Iteration 33, loss = 0.26521265
Iteration 34, loss = 0.26367770
Iteration 35, loss = 0.26246347
Iteration 36, loss = 0.26155005
Iteration 37, loss = 0.26060272
Iteration 38, loss = 0.25971926
Iteration 39, loss = 0.25874856
Iteration 40, loss = 0.25809851
Iteration 41, loss = 0.25736570
Iteration 42, loss = 0.25906457
Iteration 43, loss = 0.26712493
Iteration 44, loss = 0.26511858
Iteration 45, loss = 0.26332752
Iteration 46, loss = 0.26185293
Iteration 47, loss = 0.26049087
Iteration 48, loss = 0.25927244
Iteration 49, loss = 0.27019815
Iteration 50, loss = 0.29838617
Iteration 51, loss = 0.29491876
Iteration 52, loss = 0.29209841
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.81906224
Iteration 2, loss = 0.72371820
Iteration 3, loss = 0.67166515
Iteration 4, loss = 0.67020936
Iteration 5, loss = 0.66427655
Iteration 6, loss = 0.65094140
Iteration 7, loss = 0.71845631
Iteration 8, loss = 0.68693024
Iteration 9, loss = 0.69103532
Iteration 10, loss = 0.66967720
Iteration 11, loss = 0.65033252
Iteration 12, loss = 0.63469503
Iteration 13, loss = 0.62164129
Iteration 14, loss = 0.61068459
Iteration 15, loss = 0.59761025
Iteration 16, loss = 0.58596378
Iteration 17, loss = 0.57583149
Iteration 18, loss = 0.58265262
Iteration 19, loss = 0.56904682
Iteration 20, loss = 0.55815485
Iteration 21, loss = 0.54628210
Iteration 22, loss = 0.52502809
Iteration 23, loss = 0.53871896
Iteration 24, loss = 0.52831982
Iteration 25, loss = 0.47692199
Iteration 26, loss = 0.73062964
Iteration 27, loss = 0.57531390
Iteration 28, loss = 0.52669994
Iteration 29, loss = 0.47534512
Iteration 30, loss = 0.46877953
Iteration 31, loss = 0.45537436
Iteration 32, loss = 0.44876915
Iteration 33, loss = 0.43947008
Iteration 34, loss = 0.43078406
Iteration 35, loss = 0.41759735
Iteration 36, loss = 0.44296970
Iteration 37, loss = 0.42499293
Iteration 38, loss = 0.41326008
Iteration 39, loss = 0.40446947
Iteration 40, loss = 0.39717309
Iteration 41, loss = 0.40031593
Iteration 42, loss = 0.38572863
Iteration 43, loss = 0.37614323
Iteration 44, loss = 0.37141572
Iteration 45, loss = 0.36696046
Iteration 46, loss = 0.36287910
Iteration 47, loss = 0.35933471
Iteration 48, loss = 0.35600087
Iteration 49, loss = 0.35299371
Iteration 50, loss = 0.35022248
Iteration 51, loss = 0.34766891
Iteration 52, loss = 0.34528952
Iteration 53, loss = 0.34309682
Iteration 54, loss = 0.34090208
Iteration 55, loss = 0.33901193
Iteration 56, loss = 0.33719978
Iteration 57, loss = 0.33539532
Iteration 58, loss = 0.33380437
Iteration 59, loss = 0.33244522
Iteration 60, loss = 0.33221889
Iteration 61, loss = 0.34726820
Iteration 62, loss = 0.34921166
Iteration 63, loss = 0.34701461
Iteration 64, loss = 0.34503869
Iteration 65, loss = 0.34328825
Iteration 66, loss = 0.34160656
Iteration 67, loss = 0.33993016
Iteration 68, loss = 0.33808947
Iteration 69, loss = 0.33639760
Iteration 70, loss = 0.33373090
Iteration 71, loss = 0.33169984
Iteration 72, loss = 0.33038860
Iteration 73, loss = 0.32910365
Iteration 74, loss = 0.32796822
Iteration 75, loss = 0.32688856
Iteration 76, loss = 0.32587376
Iteration 77, loss = 0.32487630
Iteration 78, loss = 0.32398577
Iteration 79, loss = 0.32315542
Iteration 80, loss = 0.32221422
Iteration 81, loss = 0.32140336
Iteration 82, loss = 0.32074095
Iteration 83, loss = 0.32004317
Iteration 84, loss = 0.31942046
Iteration 85, loss = 0.31871257
Iteration 86, loss = 0.31810335
Iteration 87, loss = 0.31758746
Iteration 88, loss = 0.31685684
Iteration 89, loss = 0.31632890
Iteration 90, loss = 0.31584371
Iteration 91, loss = 0.31526156
Iteration 92, loss = 0.31210529
Iteration 93, loss = 0.31113592
Iteration 94, loss = 0.31065396
Iteration 95, loss = 0.31010192
Iteration 96, loss = 0.30831550
Iteration 97, loss = 0.30542376
Iteration 98, loss = 0.30439701
Iteration 99, loss = 0.30379640
Iteration 100, loss = 0.30317306
Iteration 101, loss = 0.30269224
Iteration 102, loss = 0.30217333
Iteration 103, loss = 0.30165980
Iteration 104, loss = 0.30124014
Iteration 105, loss = 0.30085693
Iteration 106, loss = 0.30038738
Iteration 107, loss = 0.29906566
Iteration 108, loss = 0.29775033
Iteration 109, loss = 0.29724706
Iteration 110, loss = 0.29669261
Iteration 111, loss = 0.29613401
Iteration 112, loss = 0.29563443
Iteration 113, loss = 0.29516625
Iteration 114, loss = 0.29471408
Iteration 115, loss = 0.29388443
Iteration 116, loss = 0.29351711
Iteration 117, loss = 0.29397106
Iteration 118, loss = 0.29514962
Iteration 119, loss = 0.29326344
Iteration 120, loss = 0.29229938
Iteration 121, loss = 0.29180480
Iteration 122, loss = 0.29145273
Iteration 123, loss = 0.29128340
Iteration 124, loss = 0.29107663
Iteration 125, loss = 0.29095560
Iteration 126, loss = 0.29369651
Iteration 127, loss = 0.29305112
Iteration 128, loss = 0.29272488
Iteration 129, loss = 0.29254653
Iteration 130, loss = 0.29239242
Iteration 131, loss = 0.29224326
Iteration 132, loss = 0.29212964
Iteration 133, loss = 0.29193720
Iteration 134, loss = 0.29181242
Iteration 135, loss = 0.29165315
Iteration 136, loss = 0.29153973
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.91394817
Iteration 2, loss = 0.80415978
Iteration 3, loss = 0.71385999
Iteration 4, loss = 0.66297701
Iteration 5, loss = 0.62001591
Iteration 6, loss = 0.57564015
Iteration 7, loss = 0.55563760
Iteration 8, loss = 0.56485799
Iteration 9, loss = 0.53362742
Iteration 10, loss = 0.51032493
Iteration 11, loss = 0.47497348
Iteration 12, loss = 0.45174380
Iteration 13, loss = 0.44347779
Iteration 14, loss = 0.45561669
Iteration 15, loss = 0.41391764
Iteration 16, loss = 0.40249854
Iteration 17, loss = 0.39503055
Iteration 18, loss = 0.38631694
Iteration 19, loss = 0.36059742
Iteration 20, loss = 0.33695053
Iteration 21, loss = 0.32326000
Iteration 22, loss = 0.39300251
Iteration 23, loss = 0.45090641
Iteration 24, loss = 0.42308233
Iteration 25, loss = 0.42175570
Iteration 26, loss = 0.42097984
Iteration 27, loss = 0.40695301
Iteration 28, loss = 0.40410489
Iteration 29, loss = 0.40441888
Iteration 30, loss = 0.39270684
Iteration 31, loss = 0.37253948
Iteration 32, loss = 0.36602795
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
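The repeated stopping message is worth unpacking: it is the plateau-based early stop that halts training once the loss fails to improve by more than `tol` for `n_iter_no_change` consecutive epochs. A minimal sketch of how such logs are produced and controlled, assuming these fits come from scikit-learn's `MLPClassifier` (the training cell itself is not shown in this excerpt, so the hyperparameters and data below are illustrative only):

```python
# Illustrative sketch (assumption: the logs above come from sklearn's
# MLPClassifier with verbose=True; the data and layer sizes here are made up).
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for the balanced fraud sample (7 numeric features).
X, y = make_classification(n_samples=500, n_features=7, random_state=42)

clf = MLPClassifier(
    hidden_layer_sizes=(50,),
    tol=1e-4,             # minimum loss improvement that counts as progress
    n_iter_no_change=10,  # epochs without progress tolerated before stopping
    max_iter=300,
    verbose=False,        # set True to reproduce the per-iteration log above
    random_state=42,
)
clf.fit(X, y)
print(clf.n_iter_, clf.loss_)  # epochs actually run and the final training loss
```

Setting `verbose=False` (the default) suppresses the per-iteration printout entirely, which keeps notebook output readable when many candidate networks are fitted in a loop or grid search.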
Iteration 1, loss = 1.17755919
...
Iteration 87, loss = 0.21935164
Iteration 88, loss = 0.21441931
Iteration 89, loss = 0.20702847
Iteration 90, loss = 0.20588268
Iteration 91, loss = 0.20524265
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.94253227
Iteration 2, loss = 0.84067696
Iteration 3, loss = 0.78424489
Iteration 4, loss = 0.73842600
Iteration 5, loss = 0.70139220
Iteration 6, loss = 0.66972671
Iteration 7, loss = 0.64589362
Iteration 8, loss = 0.63155161
Iteration 9, loss = 0.62041750
Iteration 10, loss = 0.56149053
Iteration 11, loss = 0.54065433
Iteration 12, loss = 0.52227058
Iteration 13, loss = 0.51530045
Iteration 14, loss = 0.54189465
Iteration 15, loss = 0.50532232
Iteration 16, loss = 0.47581932
Iteration 17, loss = 0.38229520
Iteration 18, loss = 0.36370279
Iteration 19, loss = 0.33781768
Iteration 20, loss = 0.34719593
Iteration 21, loss = 0.33015413
Iteration 22, loss = 0.31190182
Iteration 23, loss = 0.45496350
Iteration 24, loss = 0.45878472
Iteration 25, loss = 0.42731142
Iteration 26, loss = 0.41327881
Iteration 27, loss = 0.40357415
Iteration 28, loss = 0.39800692
Iteration 29, loss = 0.39318782
Iteration 30, loss = 0.38631397
Iteration 31, loss = 0.37975029
Iteration 32, loss = 0.37459608
Iteration 33, loss = 0.37247635
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 14.21236877
Iteration 2, loss = 13.82814172
Iteration 3, loss = 13.45827508
Iteration 4, loss = 12.58624806
Iteration 5, loss = 11.25538440
Iteration 6, loss = 9.82547495
Iteration 7, loss = 8.24870721
Iteration 8, loss = 5.84177859
Iteration 9, loss = 1.46136125
Iteration 10, loss = 0.77960863
Iteration 11, loss = 0.76786257
Iteration 12, loss = 0.75833009
Iteration 13, loss = 0.75032723
Iteration 14, loss = 0.74020619
Iteration 15, loss = 0.73114415
Iteration 16, loss = 0.72631494
Iteration 17, loss = 0.72226507
Iteration 18, loss = 0.71882991
Iteration 19, loss = 0.71593967
Iteration 20, loss = 0.71354980
Iteration 21, loss = 0.71151080
Iteration 22, loss = 0.70983236
Iteration 23, loss = 0.70846117
Iteration 24, loss = 0.70734203
Iteration 25, loss = 0.70641039
Iteration 26, loss = 0.70228206
Iteration 27, loss = 0.70167941
Iteration 28, loss = 0.70119299
Iteration 29, loss = 0.69744861
Iteration 30, loss = 0.69714062
Iteration 31, loss = 0.69691484
Iteration 32, loss = 0.69673407
Iteration 33, loss = 0.69659417
Iteration 34, loss = 0.69648819
Iteration 35, loss = 0.69641515
Iteration 36, loss = 0.69641971
Iteration 37, loss = 0.69301461
Iteration 38, loss = 0.69298599
Iteration 39, loss = 0.69295776
Iteration 40, loss = 0.69294422
Iteration 41, loss = 0.69292627
Iteration 42, loss = 0.69292964
Iteration 43, loss = 0.69291174
Iteration 44, loss = 0.69291074
Iteration 45, loss = 0.69290969
Iteration 46, loss = 0.69291873
Iteration 47, loss = 0.69291196
Iteration 48, loss = 0.69289801
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 15.70064620
Iteration 2, loss = 13.13589465
Iteration 3, loss = 12.16983908
Iteration 4, loss = 12.13503992
Iteration 5, loss = 11.66642000
Iteration 6, loss = 11.50051410
Iteration 7, loss = 10.73073641
Iteration 8, loss = 10.55687596
Iteration 9, loss = 10.12837697
Iteration 10, loss = 9.68038641
Iteration 11, loss = 9.21156813
Iteration 12, loss = 9.03035804
Iteration 13, loss = 8.76022005
Iteration 14, loss = 8.52588180
Iteration 15, loss = 8.21514606
Iteration 16, loss = 8.20601974
Iteration 17, loss = 7.93129648
Iteration 18, loss = 7.96044987
Iteration 19, loss = 7.74339191
Iteration 20, loss = 7.61641283
Iteration 21, loss = 7.63566318
Iteration 22, loss = 7.59858604
Iteration 23, loss = 7.60562463
Iteration 24, loss = 7.45644865
Iteration 25, loss = 7.28332045
Iteration 26, loss = 7.40924193
Iteration 27, loss = 7.71647377
Iteration 28, loss = 7.41687836
Iteration 29, loss = 7.49397288
Iteration 30, loss = 7.33590704
Iteration 31, loss = 7.39317021
Iteration 32, loss = 7.19834950
Iteration 33, loss = 7.26558415
Iteration 34, loss = 7.28161837
Iteration 35, loss = 6.98718239
Iteration 36, loss = 7.11165841
Iteration 37, loss = 6.57379352
Iteration 38, loss = 5.63807085
Iteration 39, loss = 3.41673734
Iteration 40, loss = 3.32503003
Iteration 41, loss = 3.76835901
Iteration 42, loss = 3.77432193
Iteration 43, loss = 4.94716258
Iteration 44, loss = 3.77135997
Iteration 45, loss = 3.53270353
Iteration 46, loss = 3.13826869
Iteration 47, loss = 3.90726476
Iteration 48, loss = 4.36938086
Iteration 49, loss = 4.39815085
Iteration 50, loss = 3.61530299
Iteration 51, loss = 3.72261068
Iteration 52, loss = 3.76907704
Iteration 53, loss = 4.06278613
Iteration 54, loss = 4.04057127
Iteration 55, loss = 3.50776856
Iteration 56, loss = 4.76886985
Iteration 57, loss = 4.73488399
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 18.02449793
Iteration 2, loss = 18.06563975
Iteration 3, loss = 17.80791228
Iteration 4, loss = 16.95345149
Iteration 5, loss = 11.78012120
Iteration 6, loss = 6.98716884
Iteration 7, loss = 6.31251832
Iteration 8, loss = 5.60439893
Iteration 9, loss = 4.60984866
Iteration 10, loss = 1.75135857
Iteration 11, loss = 0.77857161
Iteration 12, loss = 0.77778799
Iteration 13, loss = 0.77704838
Iteration 14, loss = 0.77298558
Iteration 15, loss = 0.77234140
Iteration 16, loss = 0.77175557
Iteration 17, loss = 0.77128513
Iteration 18, loss = 0.77079986
Iteration 19, loss = 0.76025062
Iteration 20, loss = 0.75648991
Iteration 21, loss = 0.75621951
Iteration 22, loss = 0.75590910
Iteration 23, loss = 0.75563854
Iteration 24, loss = 0.75541566
Iteration 25, loss = 0.75184780
Iteration 26, loss = 0.75168686
Iteration 27, loss = 0.75154369
Iteration 28, loss = 0.75149760
Iteration 29, loss = 0.75145547
Iteration 30, loss = 0.75144422
Iteration 31, loss = 0.75144012
Iteration 32, loss = 0.75138593
Iteration 33, loss = 0.74796176
Iteration 34, loss = 0.74799403
Iteration 35, loss = 0.74459245
Iteration 36, loss = 0.74457062
Iteration 37, loss = 0.74461293
Iteration 38, loss = 0.74466746
Iteration 39, loss = 0.74143183
Iteration 40, loss = 0.74140750
Iteration 41, loss = 0.74146235
Iteration 42, loss = 0.74145588
Iteration 43, loss = 0.74153148
Iteration 44, loss = 0.74151623
Iteration 45, loss = 0.74151753
Iteration 46, loss = 0.74151902
Iteration 47, loss = 0.74158178
Iteration 48, loss = 0.74157700
Iteration 49, loss = 0.74157607
Iteration 50, loss = 0.74157361
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 17.97439089
Iteration 2, loss = 17.92432759
Iteration 3, loss = 17.51425281
Iteration 4, loss = 15.40042134
Iteration 5, loss = 13.31075996
Iteration 6, loss = 12.53048677
Iteration 7, loss = 11.55960150
Iteration 8, loss = 9.08118287
Iteration 9, loss = 7.38935600
Iteration 10, loss = 6.02300091
Iteration 11, loss = 4.69212436
Iteration 12, loss = 3.02309799
Iteration 13, loss = 1.24741280
Iteration 14, loss = 0.86866272
Iteration 15, loss = 0.85316113
Iteration 16, loss = 0.82435821
Iteration 17, loss = 0.81274191
Iteration 18, loss = 0.79781883
Iteration 19, loss = 0.78647586
Iteration 20, loss = 0.77533363
Iteration 21, loss = 0.76110709
Iteration 22, loss = 0.75713037
Iteration 23, loss = 0.75003164
Iteration 24, loss = 0.74652395
Iteration 25, loss = 0.74325168
Iteration 26, loss = 0.74021182
Iteration 27, loss = 0.73740715
Iteration 28, loss = 0.73484917
Iteration 29, loss = 0.73246417
Iteration 30, loss = 0.73026878
Iteration 31, loss = 0.72825871
Iteration 32, loss = 0.72650957
Iteration 33, loss = 0.72487889
Iteration 34, loss = 0.72340135
Iteration 35, loss = 0.72207171
Iteration 36, loss = 0.72086911
Iteration 37, loss = 0.71980107
Iteration 38, loss = 0.71549069
Iteration 39, loss = 0.71465446
Iteration 40, loss = 0.71391352
Iteration 41, loss = 0.71327186
Iteration 42, loss = 0.71271133
Iteration 43, loss = 0.71222010
Iteration 44, loss = 0.71179712
Iteration 45, loss = 0.71143761
Iteration 46, loss = 0.71112576
Iteration 47, loss = 0.71085854
Iteration 48, loss = 0.71064275
Iteration 49, loss = 0.71046134
Iteration 50, loss = 0.71030472
Iteration 51, loss = 0.70680830
Iteration 52, loss = 0.70670099
Iteration 53, loss = 0.70661010
Iteration 54, loss = 0.70654273
Iteration 55, loss = 0.70648296
Iteration 56, loss = 0.70643849
Iteration 57, loss = 0.70640266
Iteration 58, loss = 0.70637586
Iteration 59, loss = 0.70634853
Iteration 60, loss = 0.70633094
Iteration 61, loss = 0.70631853
Iteration 62, loss = 0.70630541
Iteration 63, loss = 0.70630660
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 13.65666773
Iteration 2, loss = 11.63400724
Iteration 3, loss = 11.25792610
Iteration 4, loss = 11.21008208
Iteration 5, loss = 3.58921926
Iteration 6, loss = 3.57245835
Iteration 7, loss = 3.37896616
Iteration 8, loss = 3.40689733
Iteration 9, loss = 3.42559774
Iteration 10, loss = 3.32959246
Iteration 11, loss = 3.22184205
Iteration 12, loss = 3.26600424
Iteration 13, loss = 3.01627832
Iteration 14, loss = 3.29366069
Iteration 15, loss = 3.01950086
Iteration 16, loss = 3.15034803
Iteration 17, loss = 3.37526309
Iteration 18, loss = 3.15955891
Iteration 19, loss = 3.07834006
Iteration 20, loss = 2.93769084
Iteration 21, loss = 3.26295724
Iteration 22, loss = 2.94768703
Iteration 23, loss = 2.98621475
Iteration 24, loss = 2.94293418
Iteration 25, loss = 2.96843351
Iteration 26, loss = 2.84310260
Iteration 27, loss = 3.03274618
Iteration 28, loss = 2.81894142
Iteration 29, loss = 2.94263761
Iteration 30, loss = 2.97213894
Iteration 31, loss = 3.00788000
Iteration 32, loss = 2.97365602
Iteration 33, loss = 2.79426480
Iteration 34, loss = 2.87392930
Iteration 35, loss = 2.96300372
Iteration 36, loss = 2.95925597
Iteration 37, loss = 2.59198257
Iteration 38, loss = 3.28094018
Iteration 39, loss = 3.08131841
Iteration 40, loss = 2.74723094
Iteration 41, loss = 3.12411019
Iteration 42, loss = 2.75292694
Iteration 43, loss = 2.92092137
Iteration 44, loss = 2.94600788
Iteration 45, loss = 2.83254850
Iteration 46, loss = 2.68660162
Iteration 47, loss = 3.10586184
Iteration 48, loss = 2.97327470
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.56333954
Iteration 2, loss = 0.31927972
Iteration 3, loss = 0.23888535
Iteration 4, loss = 0.18619192
Iteration 5, loss = 0.18914742
Iteration 6, loss = 0.18841956
Iteration 7, loss = 0.20894363
Iteration 8, loss = 0.19887317
Iteration 9, loss = 0.19849882
Iteration 10, loss = 0.20756230
Iteration 11, loss = 0.21843011
Iteration 12, loss = 0.21576486
Iteration 13, loss = 0.21221351
Iteration 14, loss = 0.19349459
Iteration 15, loss = 0.18527131
Iteration 16, loss = 0.18325674
Iteration 17, loss = 0.18376208
Iteration 18, loss = 0.18208527
Iteration 19, loss = 0.17206052
Iteration 20, loss = 0.16719533
Iteration 21, loss = 0.18035221
Iteration 22, loss = 0.16737643
Iteration 23, loss = 0.17720348
Iteration 24, loss = 0.17130702
Iteration 25, loss = 0.20055247
Iteration 26, loss = 0.20426889
Iteration 27, loss = 0.20360461
Iteration 28, loss = 0.19932186
Iteration 29, loss = 0.21056915
Iteration 30, loss = 0.20568721
Iteration 31, loss = 0.20610240
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.52071957
Iteration 2, loss = 0.29215665
Iteration 3, loss = 0.22314073
Iteration 4, loss = 0.20272725
Iteration 5, loss = 0.20517040
Iteration 6, loss = 0.19279963
Iteration 7, loss = 0.19833261
Iteration 8, loss = 0.20357296
Iteration 9, loss = 0.18310910
Iteration 10, loss = 0.16730385
Iteration 11, loss = 0.19003368
Iteration 12, loss = 0.20646803
Iteration 13, loss = 0.20585662
Iteration 14, loss = 0.20164412
Iteration 15, loss = 0.20870620
Iteration 16, loss = 0.20255516
Iteration 17, loss = 0.18309003
Iteration 18, loss = 0.18944165
Iteration 19, loss = 0.18774271
Iteration 20, loss = 0.20039881
Iteration 21, loss = 0.18453060
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.54057979
Iteration 2, loss = 0.30287790
Iteration 3, loss = 0.21111471
Iteration 4, loss = 0.18531361
Iteration 5, loss = 0.18113553
Iteration 6, loss = 0.18234874
Iteration 7, loss = 0.18011943
Iteration 8, loss = 0.18424360
Iteration 9, loss = 0.18129360
Iteration 10, loss = 0.17864823
Iteration 11, loss = 0.16187025
Iteration 12, loss = 0.16112949
Iteration 13, loss = 0.18061787
Iteration 14, loss = 0.19214831
Iteration 15, loss = 0.18742624
Iteration 16, loss = 0.18347024
Iteration 17, loss = 0.17858718
Iteration 18, loss = 0.18681752
Iteration 19, loss = 0.17919107
Iteration 20, loss = 0.21769921
Iteration 21, loss = 0.21509783
Iteration 22, loss = 0.21141633
Iteration 23, loss = 0.21058715
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.54580684
Iteration 2, loss = 0.30536884
Iteration 3, loss = 0.22395970
Iteration 4, loss = 0.20248515
Iteration 5, loss = 0.18067342
Iteration 6, loss = 0.17880403
Iteration 7, loss = 0.17751538
Iteration 8, loss = 0.18568338
Iteration 9, loss = 0.18701535
Iteration 10, loss = 0.18166297
Iteration 11, loss = 0.18376146
Iteration 12, loss = 0.17372328
Iteration 13, loss = 0.18907403
Iteration 14, loss = 0.19329776
Iteration 15, loss = 0.19950033
Iteration 16, loss = 0.20482623
Iteration 17, loss = 0.19749741
Iteration 18, loss = 0.22520701
Iteration 19, loss = 0.19611367
Iteration 20, loss = 0.21100027
Iteration 21, loss = 0.22420820
Iteration 22, loss = 0.22818931
Iteration 23, loss = 0.24222768
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.49946258
Iteration 2, loss = 0.26544979
Iteration 3, loss = 0.20149044
Iteration 4, loss = 0.19772119
Iteration 5, loss = 0.18412331
Iteration 6, loss = 0.18700681
Iteration 7, loss = 0.18265451
Iteration 8, loss = 0.19121632
Iteration 9, loss = 0.21572989
Iteration 10, loss = 0.19862518
Iteration 11, loss = 0.18582634
Iteration 12, loss = 0.17424072
Iteration 13, loss = 0.19349395
Iteration 14, loss = 0.18128766
Iteration 15, loss = 0.17548332
Iteration 16, loss = 0.17819729
Iteration 17, loss = 0.17143717
Iteration 18, loss = 0.19913968
Iteration 19, loss = 0.20008711
Iteration 20, loss = 0.18959513
Iteration 21, loss = 0.18850248
Iteration 22, loss = 0.19620917
Iteration 23, loss = 0.18728464
Iteration 24, loss = 0.18372276
Iteration 25, loss = 0.18709013
Iteration 26, loss = 0.21530663
Iteration 27, loss = 0.21089696
Iteration 28, loss = 0.21091997
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.48844279
Iteration 2, loss = 0.35576110
Iteration 3, loss = 0.30103997
Iteration 4, loss = 0.27150357
Iteration 5, loss = 0.26447082
Iteration 6, loss = 0.26022515
Iteration 7, loss = 0.23058663
Iteration 8, loss = 0.22035131
Iteration 9, loss = 0.21011183
Iteration 10, loss = 0.19176121
Iteration 11, loss = 0.18549204
Iteration 12, loss = 0.22445112
Iteration 13, loss = 0.21520330
Iteration 14, loss = 0.20394124
Iteration 15, loss = 0.19982283
Iteration 16, loss = 0.19102256
Iteration 17, loss = 0.18380387
Iteration 18, loss = 0.18454006
Iteration 19, loss = 0.19733422
Iteration 20, loss = 0.19458088
Iteration 21, loss = 0.19062359
Iteration 22, loss = 0.18827442
Iteration 23, loss = 0.18550257
Iteration 24, loss = 0.18926138
Iteration 25, loss = 0.18423747
Iteration 26, loss = 0.18081582
Iteration 27, loss = 0.17355312
Iteration 28, loss = 0.17326673
Iteration 29, loss = 0.17200173
Iteration 30, loss = 0.17150173
Iteration 31, loss = 0.17059181
Iteration 32, loss = 0.16977124
Iteration 33, loss = 0.16852731
Iteration 34, loss = 0.16771926
Iteration 35, loss = 0.16694449
Iteration 36, loss = 0.16643485
Iteration 37, loss = 0.16579238
Iteration 38, loss = 0.16508507
Iteration 39, loss = 0.16438434
Iteration 40, loss = 0.16314462
Iteration 41, loss = 0.16257858
Iteration 42, loss = 0.16259975
Iteration 43, loss = 0.16221582
Iteration 44, loss = 0.16198954
Iteration 45, loss = 0.16237024
Iteration 46, loss = 0.16378635
Iteration 47, loss = 0.16357543
Iteration 48, loss = 0.16298248
Iteration 49, loss = 0.16250985
Iteration 50, loss = 0.16266996
Iteration 51, loss = 0.16274727
Iteration 52, loss = 0.16200313
Iteration 53, loss = 0.16145991
Iteration 54, loss = 0.16145408
Iteration 55, loss = 0.16139328
Iteration 56, loss = 0.16150003
Iteration 57, loss = 0.16103778
Iteration 58, loss = 0.16056015
Iteration 59, loss = 0.16021397
Iteration 60, loss = 0.15980524
Iteration 61, loss = 0.15929387
Iteration 62, loss = 0.15618170
Iteration 63, loss = 0.15472272
Iteration 64, loss = 0.15471441
Iteration 65, loss = 0.15366607
Iteration 66, loss = 0.15299702
Iteration 67, loss = 0.15339745
Iteration 68, loss = 0.15240394
Iteration 69, loss = 0.15247929
Iteration 70, loss = 0.15747322
Iteration 71, loss = 0.15839038
Iteration 72, loss = 0.15829914
Iteration 73, loss = 0.15777685
Iteration 74, loss = 0.15771108
Iteration 75, loss = 0.15644440
Iteration 76, loss = 0.15596166
Iteration 77, loss = 0.15645766
Iteration 78, loss = 0.15551542
Iteration 79, loss = 0.15429819
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.94888670
Iteration 2, loss = 0.68361811
Iteration 3, loss = 0.51994798
Iteration 4, loss = 0.41958436
Iteration 5, loss = 0.36080822
Iteration 6, loss = 0.32324592
Iteration 7, loss = 0.29947093
Iteration 8, loss = 0.28747406
Iteration 9, loss = 0.29066976
Iteration 10, loss = 0.27794201
Iteration 11, loss = 0.26452867
Iteration 12, loss = 0.25600521
Iteration 13, loss = 0.24286448
Iteration 14, loss = 0.22973147
Iteration 15, loss = 0.24662062
Iteration 16, loss = 0.24552298
Iteration 17, loss = 0.23902820
Iteration 18, loss = 0.23546979
Iteration 19, loss = 0.23217552
Iteration 20, loss = 0.23052246
Iteration 21, loss = 0.22545066
Iteration 22, loss = 0.22345277
Iteration 23, loss = 0.21948235
Iteration 24, loss = 0.21893183
Iteration 25, loss = 0.21589210
Iteration 26, loss = 0.21344734
Iteration 27, loss = 0.21674953
Iteration 28, loss = 0.21130301
Iteration 29, loss = 0.20908487
Iteration 30, loss = 0.20731906
Iteration 31, loss = 0.20672170
Iteration 32, loss = 0.20943247
Iteration 33, loss = 0.20698425
Iteration 34, loss = 0.21695512
Iteration 35, loss = 0.21877075
Iteration 36, loss = 0.21444673
Iteration 37, loss = 0.21242766
Iteration 38, loss = 0.21972690
Iteration 39, loss = 0.21649963
Iteration 40, loss = 0.21087278
Iteration 41, loss = 0.20912002
Iteration 42, loss = 0.20781259
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.74761617
Iteration 2, loss = 0.54423770
Iteration 3, loss = 0.45191969
Iteration 4, loss = 0.38037602
Iteration 5, loss = 0.33772102
Iteration 6, loss = 0.30386116
Iteration 7, loss = 0.28333858
Iteration 8, loss = 0.27131880
Iteration 9, loss = 0.24027295
Iteration 10, loss = 0.22790470
Iteration 11, loss = 0.22153876
Iteration 12, loss = 0.20780587
Iteration 13, loss = 0.21341143
Iteration 14, loss = 0.19604143
Iteration 15, loss = 0.18416548
Iteration 16, loss = 0.17923019
Iteration 17, loss = 0.18238248
Iteration 18, loss = 0.18202761
Iteration 19, loss = 0.19020372
Iteration 20, loss = 0.19659302
Iteration 21, loss = 0.18555255
Iteration 22, loss = 0.18400376
Iteration 23, loss = 0.18568276
Iteration 24, loss = 0.17888459
Iteration 25, loss = 0.17712595
Iteration 26, loss = 0.17394073
Iteration 27, loss = 0.16852515
Iteration 28, loss = 0.16667644
Iteration 29, loss = 0.16571996
Iteration 30, loss = 0.16392407
Iteration 31, loss = 0.16201238
Iteration 32, loss = 0.16976907
Iteration 33, loss = 0.18497385
Iteration 34, loss = 0.17988977
Iteration 35, loss = 0.17829196
Iteration 36, loss = 0.17477921
Iteration 37, loss = 0.17439939
Iteration 38, loss = 0.18021709
Iteration 39, loss = 0.20232331
Iteration 40, loss = 0.19502142
Iteration 41, loss = 0.19256155
Iteration 42, loss = 0.19176732
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.63266960
Iteration 2, loss = 0.42885978
Iteration 3, loss = 0.32793099
Iteration 4, loss = 0.28645974
Iteration 5, loss = 0.24134752
Iteration 6, loss = 0.23222980
Iteration 7, loss = 0.23720929
Iteration 8, loss = 0.20875403
Iteration 9, loss = 0.21484150
Iteration 10, loss = 0.19872834
Iteration 11, loss = 0.18964414
Iteration 12, loss = 0.17537441
Iteration 13, loss = 0.18424992
Iteration 14, loss = 0.19681135
Iteration 15, loss = 0.19343606
Iteration 16, loss = 0.19282685
Iteration 17, loss = 0.19015643
Iteration 18, loss = 0.18694595
Iteration 19, loss = 0.18729059
Iteration 20, loss = 0.18360208
Iteration 21, loss = 0.18253935
Iteration 22, loss = 0.18067475
Iteration 23, loss = 0.18020710
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.61212833
Iteration 2, loss = 0.43393482
Iteration 3, loss = 0.36092854
Iteration 4, loss = 0.30991187
Iteration 5, loss = 0.25995092
Iteration 6, loss = 0.22072028
Iteration 7, loss = 0.20805616
Iteration 8, loss = 0.20460586
Iteration 9, loss = 0.20849914
Iteration 10, loss = 0.20237378
Iteration 11, loss = 0.19486610
Iteration 12, loss = 0.19113025
Iteration 13, loss = 0.19010040
Iteration 14, loss = 0.19159061
Iteration 15, loss = 0.18587337
Iteration 16, loss = 0.19039714
Iteration 17, loss = 0.18865057
Iteration 18, loss = 0.18987706
Iteration 19, loss = 0.20175793
Iteration 20, loss = 0.19498047
Iteration 21, loss = 0.20583175
Iteration 22, loss = 0.20256062
Iteration 23, loss = 0.19867913
Iteration 24, loss = 0.19241658
Iteration 25, loss = 0.19929523
Iteration 26, loss = 0.19797497
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 14.31402802
Iteration 2, loss = 8.39399324
Iteration 3, loss = 8.49642489
Iteration 4, loss = 8.10444464
Iteration 5, loss = 7.79059164
Iteration 6, loss = 5.89246667
Iteration 7, loss = 5.23233833
Iteration 8, loss = 4.62153280
Iteration 9, loss = 4.15186222
Iteration 10, loss = 4.28422074
Iteration 11, loss = 3.73075236
Iteration 12, loss = 3.74008725
Iteration 13, loss = 3.79905135
Iteration 14, loss = 3.53871222
Iteration 15, loss = 3.86187126
Iteration 16, loss = 3.95166097
Iteration 17, loss = 4.08641905
Iteration 18, loss = 3.33787104
Iteration 19, loss = 3.65966919
Iteration 20, loss = 3.99267523
Iteration 21, loss = 3.55987271
Iteration 22, loss = 4.41756115
Iteration 23, loss = 4.17157196
Iteration 24, loss = 4.25362850
Iteration 25, loss = 3.84805230
Iteration 26, loss = 3.79042210
Iteration 27, loss = 4.15123867
Iteration 28, loss = 3.63684186
Iteration 29, loss = 3.61568448
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 22.07749324
Iteration 2, loss = 22.03801283
Iteration 3, loss = 22.29766840
Iteration 4, loss = 22.37898554
Iteration 5, loss = 13.08683476
Iteration 6, loss = 13.70764918
Iteration 7, loss = 13.40138429
Iteration 8, loss = 13.18948853
Iteration 9, loss = 13.28807375
Iteration 10, loss = 13.09783491
Iteration 11, loss = 12.58563067
Iteration 12, loss = 12.83100852
Iteration 13, loss = 12.54689741
Iteration 14, loss = 12.86569326
Iteration 15, loss = 13.41856996
Iteration 16, loss = 12.98288600
Iteration 17, loss = 11.05267574
Iteration 18, loss = 9.51808256
Iteration 19, loss = 7.97650660
Iteration 20, loss = 5.19770652
Iteration 21, loss = 5.32299714
Iteration 22, loss = 4.15144904
Iteration 23, loss = 3.02396597
Iteration 24, loss = 3.22302636
Iteration 25, loss = 2.41348634
Iteration 26, loss = 3.54395875
Iteration 27, loss = 3.19199953
Iteration 28, loss = 3.30171801
Iteration 29, loss = 2.69973138
Iteration 30, loss = 3.33342897
Iteration 31, loss = 2.92405051
Iteration 32, loss = 3.01969686
Iteration 33, loss = 2.57772521
Iteration 34, loss = 2.85386976
Iteration 35, loss = 3.04905658
Iteration 36, loss = 2.49067156
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 18.01131308
Iteration 2, loss = 17.06347538
Iteration 3, loss = 16.34609570
Iteration 4, loss = 13.33544463
Iteration 5, loss = 13.90736221
Iteration 6, loss = 13.23615183
Iteration 7, loss = 13.54842323
Iteration 8, loss = 12.20496155
Iteration 9, loss = 11.25423083
Iteration 10, loss = 9.22076452
Iteration 11, loss = 9.32822531
Iteration 12, loss = 8.49567645
Iteration 13, loss = 7.99332844
Iteration 14, loss = 8.01094365
Iteration 15, loss = 7.31852029
Iteration 16, loss = 7.20446788
Iteration 17, loss = 7.00039363
Iteration 18, loss = 6.58237885
Iteration 19, loss = 6.70917869
Iteration 20, loss = 6.60794034
Iteration 21, loss = 6.60509759
Iteration 22, loss = 6.35380051
Iteration 23, loss = 6.85237076
Iteration 24, loss = 6.45484007
Iteration 25, loss = 6.35492804
Iteration 26, loss = 6.39450019
Iteration 27, loss = 6.04418355
Iteration 28, loss = 6.43166377
Iteration 29, loss = 6.17828369
Iteration 30, loss = 5.94322533
Iteration 31, loss = 6.06053751
Iteration 32, loss = 6.62904058
Iteration 33, loss = 6.16134719
Iteration 34, loss = 6.02470099
Iteration 35, loss = 6.06904854
Iteration 36, loss = 6.06647440
Iteration 37, loss = 6.11549773
Iteration 38, loss = 5.91276250
Iteration 39, loss = 5.97090981
Iteration 40, loss = 5.75892191
Iteration 41, loss = 5.85818073
Iteration 42, loss = 5.76619263
Iteration 43, loss = 5.69301352
Iteration 44, loss = 5.80973076
Iteration 45, loss = 5.89953304
Iteration 46, loss = 6.25110121
Iteration 47, loss = 5.19092336
Iteration 48, loss = 4.38549703
Iteration 49, loss = 4.96171027
Iteration 50, loss = 4.21857967
Iteration 51, loss = 5.28253218
Iteration 52, loss = 3.66763305
Iteration 53, loss = 3.99640897
Iteration 54, loss = 3.42827467
Iteration 55, loss = 4.69500593
Iteration 56, loss = 4.15032224
Iteration 57, loss = 3.40034114
Iteration 58, loss = 4.40729842
Iteration 59, loss = 4.74181838
Iteration 60, loss = 3.26931748
Iteration 61, loss = 4.25198932
Iteration 62, loss = 3.68719675
Iteration 63, loss = 3.93230307
Iteration 64, loss = 4.17868263
Iteration 65, loss = 4.17696336
Iteration 66, loss = 3.61093608
Iteration 67, loss = 4.33229789
Iteration 68, loss = 3.50173745
Iteration 69, loss = 4.19269383
Iteration 70, loss = 3.32131787
Iteration 71, loss = 3.18533518
Iteration 72, loss = 2.99739328
Iteration 73, loss = 3.74584578
Iteration 74, loss = 3.82697025
Iteration 75, loss = 3.64376008
Iteration 76, loss = 3.47941502
Iteration 77, loss = 3.85620020
Iteration 78, loss = 3.39516840
Iteration 79, loss = 3.69152623
Iteration 80, loss = 3.52693734
Iteration 81, loss = 4.08809516
Iteration 82, loss = 3.76720417
Iteration 83, loss = 3.58588712
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 23.08579120
Iteration 2, loss = 19.84998554
Iteration 3, loss = 14.07638184
Iteration 4, loss = 12.11615458
Iteration 5, loss = 11.68878058
Iteration 6, loss = 9.29073556
Iteration 7, loss = 7.24326251
Iteration 8, loss = 6.66726954
Iteration 9, loss = 6.93023843
Iteration 10, loss = 6.79319191
Iteration 11, loss = 6.55551248
Iteration 12, loss = 6.41842204
Iteration 13, loss = 6.73962762
Iteration 14, loss = 6.10059822
Iteration 15, loss = 6.35582587
Iteration 16, loss = 6.22403866
Iteration 17, loss = 6.16747059
Iteration 18, loss = 5.91193693
Iteration 19, loss = 4.97283635
Iteration 20, loss = 4.03797269
Iteration 21, loss = 3.47716532
Iteration 22, loss = 3.78125442
Iteration 23, loss = 3.58105437
Iteration 24, loss = 3.29626926
Iteration 25, loss = 2.98695251
Iteration 26, loss = 3.56097739
Iteration 27, loss = 3.42288079
Iteration 28, loss = 3.03192984
Iteration 29, loss = 2.96518246
Iteration 30, loss = 3.72683515
Iteration 31, loss = 2.96802437
Iteration 32, loss = 3.30980662
Iteration 33, loss = 3.32663556
[Verbose per-iteration MLPClassifier training output condensed. This chunk logged 27 separate fits (the first fit's opening iterations fall in the previous chunk); every fit ended with the same early-stopping message:

    Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.

Individual fits ran for 22 to 114 iterations. Final training losses ranged from about 0.16 to about 5.1: roughly half the fits settled near 0.16-0.47, while the rest stalled between about 1.4 and 5.1, indicating that some hyperparameter/feature-scaling combinations trained far better than others.]
Iteration 1, loss = 13.93795933
Iteration 2, loss = 14.72626776
Iteration 3, loss = 12.86511052
Iteration 4, loss = 10.19250376
Iteration 5, loss = 7.62404455
Iteration 6, loss = 6.28435123
Iteration 7, loss = 6.13987326
Iteration 8, loss = 5.54120524
Iteration 9, loss = 5.84088565
Iteration 10, loss = 5.80544006
Iteration 11, loss = 6.17118786
Iteration 12, loss = 4.23722314
Iteration 13, loss = 3.68014604
Iteration 14, loss = 3.18772765
Iteration 15, loss = 3.03968377
Iteration 16, loss = 3.78747050
Iteration 17, loss = 3.20618270
Iteration 18, loss = 3.90025466
Iteration 19, loss = 4.21155402
Iteration 20, loss = 3.80297479
Iteration 21, loss = 4.08500739
Iteration 22, loss = 3.94315306
Iteration 23, loss = 4.61042385
Iteration 24, loss = 3.94636509
Iteration 25, loss = 4.51795000
Iteration 26, loss = 3.82058012
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 13.69404226
Iteration 2, loss = 12.98998587
Iteration 3, loss = 13.07919624
Iteration 4, loss = 12.64747418
Iteration 5, loss = 12.65266711
Iteration 6, loss = 13.29999208
Iteration 7, loss = 13.47045739
Iteration 8, loss = 13.71248028
Iteration 9, loss = 14.02442385
Iteration 10, loss = 13.64871196
Iteration 11, loss = 13.99174668
Iteration 12, loss = 13.59745472
Iteration 13, loss = 10.73344805
Iteration 14, loss = 7.33736004
Iteration 15, loss = 5.11862398
Iteration 16, loss = 4.72810944
Iteration 17, loss = 3.94741850
Iteration 18, loss = 4.68206325
Iteration 19, loss = 3.79842694
Iteration 20, loss = 4.56420088
Iteration 21, loss = 3.66995924
Iteration 22, loss = 3.94621291
Iteration 23, loss = 4.35815127
Iteration 24, loss = 3.75080348
Iteration 25, loss = 3.71976349
Iteration 26, loss = 4.05659775
Iteration 27, loss = 4.11655412
Iteration 28, loss = 3.69114231
Iteration 29, loss = 3.37592885
Iteration 30, loss = 4.58247443
Iteration 31, loss = 4.24653135
Iteration 32, loss = 3.99657316
Iteration 33, loss = 4.28252109
Iteration 34, loss = 3.71955374
Iteration 35, loss = 3.98286230
Iteration 36, loss = 3.86742896
Iteration 37, loss = 3.70677118
Iteration 38, loss = 3.47723241
Iteration 39, loss = 4.20922609
Iteration 40, loss = 4.44779691
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 10.45292380
Iteration 2, loss = 10.34309959
Iteration 3, loss = 8.54196654
Iteration 4, loss = 7.72295186
Iteration 5, loss = 7.14433002
Iteration 6, loss = 6.61951533
Iteration 7, loss = 6.28686085
Iteration 8, loss = 6.37164918
Iteration 9, loss = 6.17913902
Iteration 10, loss = 5.75260515
Iteration 11, loss = 6.30242903
Iteration 12, loss = 6.11546022
Iteration 13, loss = 5.78833698
Iteration 14, loss = 5.64633732
Iteration 15, loss = 5.65233592
Iteration 16, loss = 5.85316300
Iteration 17, loss = 5.61118396
Iteration 18, loss = 5.32603255
Iteration 19, loss = 5.62116622
Iteration 20, loss = 5.83303056
Iteration 21, loss = 6.07887134
Iteration 22, loss = 5.64472137
Iteration 23, loss = 5.48645590
Iteration 24, loss = 5.78914706
Iteration 25, loss = 5.68411738
Iteration 26, loss = 5.49029438
Iteration 27, loss = 5.36120885
Iteration 28, loss = 5.28394534
Iteration 29, loss = 5.74940448
Iteration 30, loss = 5.84943004
Iteration 31, loss = 5.54570117
Iteration 32, loss = 5.37947509
Iteration 33, loss = 5.65846442
Iteration 34, loss = 5.70898076
Iteration 35, loss = 5.60282980
Iteration 36, loss = 5.31773628
Iteration 37, loss = 5.44813820
Iteration 38, loss = 5.58092373
Iteration 39, loss = 5.33951805
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 17.13888399
Iteration 2, loss = 17.47941059
Iteration 3, loss = 13.25629569
Iteration 4, loss = 12.13759596
Iteration 5, loss = 10.57498646
Iteration 6, loss = 9.17731351
Iteration 7, loss = 8.41192443
Iteration 8, loss = 8.06599936
Iteration 9, loss = 7.31233372
Iteration 10, loss = 6.86026891
Iteration 11, loss = 6.45904329
Iteration 12, loss = 5.55865197
Iteration 13, loss = 5.60624426
Iteration 14, loss = 5.58795781
Iteration 15, loss = 5.49746436
Iteration 16, loss = 5.38898853
Iteration 17, loss = 5.50276254
Iteration 18, loss = 5.58513310
Iteration 19, loss = 5.61450585
Iteration 20, loss = 5.46395571
Iteration 21, loss = 5.51152137
Iteration 22, loss = 5.33922206
Iteration 23, loss = 5.49088118
Iteration 24, loss = 5.32878812
Iteration 25, loss = 5.39747833
Iteration 26, loss = 5.36677262
Iteration 27, loss = 5.46728666
Iteration 28, loss = 5.26606469
Iteration 29, loss = 5.44168747
Iteration 30, loss = 5.51513867
Iteration 31, loss = 5.42342493
Iteration 32, loss = 5.14013543
Iteration 33, loss = 5.17168793
Iteration 34, loss = 5.69063421
Iteration 35, loss = 5.48485789
Iteration 36, loss = 5.23591390
Iteration 37, loss = 5.52781709
Iteration 38, loss = 5.36242596
Iteration 39, loss = 5.28244342
Iteration 40, loss = 5.09820587
Iteration 41, loss = 5.49421103
Iteration 42, loss = 5.34931025
Iteration 43, loss = 5.52409082
Iteration 44, loss = 5.23053755
Iteration 45, loss = 5.57607705
Iteration 46, loss = 5.27601748
Iteration 47, loss = 5.26125874
Iteration 48, loss = 5.28184528
Iteration 49, loss = 5.26230011
Iteration 50, loss = 5.30324352
Iteration 51, loss = 4.75024516
Iteration 52, loss = 4.53569442
Iteration 53, loss = 3.80440098
Iteration 54, loss = 3.04064424
Iteration 55, loss = 3.20606793
Iteration 56, loss = 2.98741103
Iteration 57, loss = 2.70081030
Iteration 58, loss = 2.94329659
Iteration 59, loss = 2.87185482
Iteration 60, loss = 2.87135327
Iteration 61, loss = 2.62487972
Iteration 62, loss = 2.67036216
Iteration 63, loss = 2.78325915
Iteration 64, loss = 2.97212105
Iteration 65, loss = 2.79879798
Iteration 66, loss = 2.76496668
Iteration 67, loss = 2.84319696
Iteration 68, loss = 2.88455056
Iteration 69, loss = 2.65548717
Iteration 70, loss = 2.65054430
Iteration 71, loss = 2.45235305
Iteration 72, loss = 2.75983037
Iteration 73, loss = 2.38613191
Iteration 74, loss = 2.64403876
Iteration 75, loss = 2.93057607
Iteration 76, loss = 2.74988968
Iteration 77, loss = 2.51941829
Iteration 78, loss = 2.85502770
Iteration 79, loss = 3.01893767
Iteration 80, loss = 2.76257607
Iteration 81, loss = 2.83902399
Iteration 82, loss = 2.57497156
Iteration 83, loss = 2.41035255
Iteration 84, loss = 2.77676366
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 13.68494850
Iteration 2, loss = 11.36596257
Iteration 3, loss = 9.16240821
Iteration 4, loss = 8.14816352
Iteration 5, loss = 6.16266780
Iteration 6, loss = 5.32682150
Iteration 7, loss = 5.85186481
Iteration 8, loss = 5.73751227
Iteration 9, loss = 5.00235246
Iteration 10, loss = 5.19981365
Iteration 11, loss = 5.13053686
Iteration 12, loss = 4.10010953
Iteration 13, loss = 3.95707902
Iteration 14, loss = 3.01935373
Iteration 15, loss = 2.80323705
Iteration 16, loss = 3.28708118
Iteration 17, loss = 3.79190439
Iteration 18, loss = 3.21455446
Iteration 19, loss = 3.09835166
Iteration 20, loss = 3.26482771
Iteration 21, loss = 3.36655535
Iteration 22, loss = 2.69973205
Iteration 23, loss = 3.01991730
Iteration 24, loss = 3.69205455
Iteration 25, loss = 2.94041207
Iteration 26, loss = 3.19263814
Iteration 27, loss = 3.25305870
Iteration 28, loss = 2.88303588
Iteration 29, loss = 2.59506530
Iteration 30, loss = 3.55245048
Iteration 31, loss = 2.97378834
Iteration 32, loss = 3.16302731
Iteration 33, loss = 2.89082709
Iteration 34, loss = 3.18437484
Iteration 35, loss = 3.63911148
Iteration 36, loss = 3.41720347
Iteration 37, loss = 3.24426183
Iteration 38, loss = 2.39825379
Iteration 39, loss = 3.27660043
Iteration 40, loss = 2.30808959
Iteration 41, loss = 3.77227658
Iteration 42, loss = 3.38333853
Iteration 43, loss = 3.27706278
Iteration 44, loss = 2.70000665
Iteration 45, loss = 2.73384089
Iteration 46, loss = 3.09762240
Iteration 47, loss = 3.10319238
Iteration 48, loss = 2.52128582
Iteration 49, loss = 2.14916052
Iteration 50, loss = 2.81188143
Iteration 51, loss = 2.97063054
Iteration 52, loss = 2.31714909
Iteration 53, loss = 2.78543532
Iteration 54, loss = 2.94701458
Iteration 55, loss = 2.51028087
Iteration 56, loss = 2.77455280
Iteration 57, loss = 3.51793342
Iteration 58, loss = 2.40804515
Iteration 59, loss = 3.85921949
Iteration 60, loss = 3.18271639
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.54713557
Iteration 2, loss = 0.33534884
Iteration 3, loss = 0.22762194
Iteration 4, loss = 0.19675855
Iteration 5, loss = 0.18572465
Iteration 6, loss = 0.16594702
Iteration 7, loss = 0.18284423
Iteration 8, loss = 0.16262750
Iteration 9, loss = 0.15806107
Iteration 10, loss = 0.21589945
Iteration 11, loss = 0.22430564
Iteration 12, loss = 0.20533228
Iteration 13, loss = 0.19594095
Iteration 14, loss = 0.19210317
Iteration 15, loss = 0.19459021
Iteration 16, loss = 0.21075492
Iteration 17, loss = 0.19828741
Iteration 18, loss = 0.20115389
Iteration 19, loss = 0.20198774
Iteration 20, loss = 0.20105451
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.56099006
Iteration 2, loss = 0.34960887
Iteration 3, loss = 0.23015276
Iteration 4, loss = 0.19727925
Iteration 5, loss = 0.18099373
Iteration 6, loss = 0.16828079
Iteration 7, loss = 0.16496368
Iteration 8, loss = 0.17352540
Iteration 9, loss = 0.17055551
Iteration 10, loss = 0.17410318
Iteration 11, loss = 0.17244982
Iteration 12, loss = 0.18665282
Iteration 13, loss = 0.17808614
Iteration 14, loss = 0.17946935
Iteration 15, loss = 0.16172196
Iteration 16, loss = 0.16006192
Iteration 17, loss = 0.17289354
Iteration 18, loss = 0.16193456
Iteration 19, loss = 0.17480889
Iteration 20, loss = 0.17714622
Iteration 21, loss = 0.18169583
Iteration 22, loss = 0.16413681
Iteration 23, loss = 0.16791935
Iteration 24, loss = 0.16609989
Iteration 25, loss = 0.16736645
Iteration 26, loss = 0.19225841
Iteration 27, loss = 0.20403227
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.51181011
Iteration 2, loss = 0.29269711
Iteration 3, loss = 0.21218260
Iteration 4, loss = 0.19306112
Iteration 5, loss = 0.18720009
Iteration 6, loss = 0.18746653
Iteration 7, loss = 0.17151666
Iteration 8, loss = 0.16134362
Iteration 9, loss = 0.18660993
Iteration 10, loss = 0.19114009
Iteration 11, loss = 0.19536970
Iteration 12, loss = 0.18813485
Iteration 13, loss = 0.18132603
Iteration 14, loss = 0.18167353
Iteration 15, loss = 0.18294193
Iteration 16, loss = 0.18854829
Iteration 17, loss = 0.18997330
Iteration 18, loss = 0.17698955
Iteration 19, loss = 0.16943078
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.53434555
Iteration 2, loss = 0.28311826
Iteration 3, loss = 0.19785281
Iteration 4, loss = 0.16959642
Iteration 5, loss = 0.15453731
Iteration 6, loss = 0.15976615
Iteration 7, loss = 0.15495693
Iteration 8, loss = 0.15151754
Iteration 9, loss = 0.14001688
Iteration 10, loss = 0.14085421
Iteration 11, loss = 0.14393407
Iteration 12, loss = 0.16984616
Iteration 13, loss = 0.17245780
Iteration 14, loss = 0.16176399
Iteration 15, loss = 0.17825645
Iteration 16, loss = 0.17095862
Iteration 17, loss = 0.16354567
Iteration 18, loss = 0.15886424
Iteration 19, loss = 0.14982977
Iteration 20, loss = 0.15045525
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.51815206
Iteration 2, loss = 0.28930033
Iteration 3, loss = 0.21983598
Iteration 4, loss = 0.19902260
Iteration 5, loss = 0.18030254
Iteration 6, loss = 0.18982811
Iteration 7, loss = 0.17376289
Iteration 8, loss = 0.15990155
Iteration 9, loss = 0.15619884
Iteration 10, loss = 0.20003410
Iteration 11, loss = 0.20246720
Iteration 12, loss = 0.21625445
Iteration 13, loss = 0.20629155
Iteration 14, loss = 0.20396973
Iteration 15, loss = 0.20862932
Iteration 16, loss = 0.21424025
Iteration 17, loss = 0.20874928
Iteration 18, loss = 0.20754765
Iteration 19, loss = 0.21161992
Iteration 20, loss = 0.21058102
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.72468179
Iteration 2, loss = 0.70281254
Iteration 3, loss = 0.68723139
Iteration 4, loss = 0.67630190
Iteration 5, loss = 0.66911614
Iteration 6, loss = 0.66441884
Iteration 7, loss = 0.66766570
Iteration 8, loss = 0.65644583
Iteration 9, loss = 0.65545185
Iteration 10, loss = 0.65556207
Iteration 11, loss = 0.65500976
Iteration 12, loss = 0.65456320
Iteration 13, loss = 0.65417096
Iteration 14, loss = 0.65868239
Iteration 15, loss = 0.65813786
Iteration 16, loss = 0.65793599
Iteration 17, loss = 0.69801270
Iteration 18, loss = 0.73731049
Iteration 19, loss = 0.73290972
Iteration 20, loss = 0.72953554
Iteration 21, loss = 0.72397348
Iteration 22, loss = 0.71916769
Iteration 23, loss = 0.71472408
Iteration 24, loss = 0.71141637
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.71299786
Iteration 2, loss = 0.70621162
Iteration 3, loss = 0.70032295
Iteration 4, loss = 0.69395791
Iteration 5, loss = 0.68978439
Iteration 6, loss = 0.68512814
Iteration 7, loss = 0.68168682
Iteration 8, loss = 0.68410776
Iteration 9, loss = 0.68497282
Iteration 10, loss = 0.68269875
Iteration 11, loss = 0.67520704
Iteration 12, loss = 0.67169739
Iteration 13, loss = 0.66736876
Iteration 14, loss = 0.65783136
Iteration 15, loss = 0.64866972
Iteration 16, loss = 0.64274428
Iteration 17, loss = 0.64170812
Iteration 18, loss = 0.64297891
Iteration 19, loss = 0.63962251
Iteration 20, loss = 0.63633669
Iteration 21, loss = 0.63404163
Iteration 22, loss = 0.63130941
Iteration 23, loss = 0.62892472
Iteration 24, loss = 0.62665017
Iteration 25, loss = 0.62443326
Iteration 26, loss = 0.62235155
Iteration 27, loss = 0.62035522
Iteration 28, loss = 0.61846884
Iteration 29, loss = 0.61665355
Iteration 30, loss = 0.61492918
Iteration 31, loss = 0.61328194
Iteration 32, loss = 0.61172328
Iteration 33, loss = 0.61024415
Iteration 34, loss = 0.60879551
Iteration 35, loss = 0.60745363
Iteration 36, loss = 0.60618790
Iteration 37, loss = 0.60492530
Iteration 38, loss = 0.60374775
Iteration 39, loss = 0.60266173
Iteration 40, loss = 0.60157181
Iteration 41, loss = 0.60055444
Iteration 42, loss = 0.59958003
Iteration 43, loss = 0.59868275
Iteration 44, loss = 0.59778520
Iteration 45, loss = 0.59695642
Iteration 46, loss = 0.59615751
Iteration 47, loss = 0.59542049
Iteration 48, loss = 0.59469426
Iteration 49, loss = 0.59400817
Iteration 50, loss = 0.59334880
Iteration 51, loss = 0.59273786
Iteration 52, loss = 0.59250627
Iteration 53, loss = 0.59234096
Iteration 54, loss = 0.59185924
Iteration 55, loss = 0.59145844
Iteration 56, loss = 0.59102054
Iteration 57, loss = 0.59062087
Iteration 58, loss = 0.59025216
Iteration 59, loss = 0.58993083
Iteration 60, loss = 0.58958436
Iteration 61, loss = 0.58926916
Iteration 62, loss = 0.58900098
Iteration 63, loss = 0.58870771
Iteration 64, loss = 0.58845602
Iteration 65, loss = 0.58818874
Iteration 66, loss = 0.58797712
Iteration 67, loss = 0.58775697
Iteration 68, loss = 0.58751174
Iteration 69, loss = 0.58731287
Iteration 70, loss = 0.58713474
Iteration 71, loss = 0.58695765
Iteration 72, loss = 0.58678496
Iteration 73, loss = 0.58661357
Iteration 74, loss = 0.58646512
Iteration 75, loss = 0.58633405
Iteration 76, loss = 0.58617824
Iteration 77, loss = 0.58606664
Iteration 78, loss = 0.58592725
Iteration 79, loss = 0.58581677
Iteration 80, loss = 0.58570708
Iteration 81, loss = 0.58563078
Iteration 82, loss = 0.58551142
Iteration 83, loss = 0.58541119
Iteration 84, loss = 0.58533178
Iteration 85, loss = 0.58524528
Iteration 86, loss = 0.58520805
Iteration 87, loss = 0.58510734
Iteration 88, loss = 0.58503992
Iteration 89, loss = 0.58497232
Iteration 90, loss = 0.58491591
Iteration 91, loss = 0.58487396
Iteration 92, loss = 0.58480196
Iteration 93, loss = 0.58478681
Iteration 94, loss = 0.58468163
Iteration 95, loss = 0.58461854
Iteration 96, loss = 0.58459520
Iteration 97, loss = 0.58454520
Iteration 98, loss = 0.58451901
Iteration 99, loss = 0.58447257
Iteration 100, loss = 0.58441318
Iteration 101, loss = 0.58156727
Iteration 102, loss = 0.58149014
Iteration 103, loss = 0.58138017
Iteration 104, loss = 0.58128070
Iteration 105, loss = 0.58119876
Iteration 106, loss = 0.58110367
Iteration 107, loss = 0.58104009
Iteration 108, loss = 0.58096079
Iteration 109, loss = 0.58088939
Iteration 110, loss = 0.58084379
Iteration 111, loss = 0.58079062
Iteration 112, loss = 0.58074570
Iteration 113, loss = 0.58072106
Iteration 114, loss = 0.58063887
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.77245445
Iteration 2, loss = 0.74371465
Iteration 3, loss = 0.75809461
Iteration 4, loss = 0.74942458
Iteration 5, loss = 0.74124679
Iteration 6, loss = 0.73342229
Iteration 7, loss = 0.72589690
Iteration 8, loss = 0.71867133
Iteration 9, loss = 0.71167291
Iteration 10, loss = 0.70498322
Iteration 11, loss = 0.69852017
Iteration 12, loss = 0.69233409
Iteration 13, loss = 0.68636668
Iteration 14, loss = 0.68060947
Iteration 15, loss = 0.67496296
Iteration 16, loss = 0.66952482
Iteration 17, loss = 0.66433351
Iteration 18, loss = 0.65933875
Iteration 19, loss = 0.65454261
Iteration 20, loss = 0.64993041
Iteration 21, loss = 0.64550695
Iteration 22, loss = 0.64127416
Iteration 23, loss = 0.63718284
Iteration 24, loss = 0.63325907
Iteration 25, loss = 0.62949316
Iteration 26, loss = 0.62587967
Iteration 27, loss = 0.62210216
Iteration 28, loss = 0.61937820
Iteration 29, loss = 0.61836528
Iteration 30, loss = 0.61795201
Iteration 31, loss = 0.61851927
Iteration 32, loss = 0.61485121
Iteration 33, loss = 0.60998565
Iteration 34, loss = 0.60835264
Iteration 35, loss = 0.60634785
Iteration 36, loss = 0.60432852
Iteration 37, loss = 0.60236442
Iteration 38, loss = 0.60052264
Iteration 39, loss = 0.59872854
Iteration 40, loss = 0.59707436
Iteration 41, loss = 0.59546058
Iteration 42, loss = 0.59394095
Iteration 43, loss = 0.59249362
Iteration 44, loss = 0.59112729
Iteration 45, loss = 0.58982215
Iteration 46, loss = 0.58858405
Iteration 47, loss = 0.58741240
Iteration 48, loss = 0.58630405
Iteration 49, loss = 0.58525904
Iteration 50, loss = 0.58426754
Iteration 51, loss = 0.58334126
Iteration 52, loss = 0.58246389
Iteration 53, loss = 0.58162666
Iteration 54, loss = 0.58085637
Iteration 55, loss = 0.58013428
Iteration 56, loss = 0.57944119
Iteration 57, loss = 0.57879143
Iteration 58, loss = 0.57820062
Iteration 59, loss = 0.57763015
Iteration 60, loss = 0.57710181
Iteration 61, loss = 0.57661329
Iteration 62, loss = 0.57615637
Iteration 63, loss = 0.57573675
Iteration 64, loss = 0.57534589
Iteration 65, loss = 0.57499062
Iteration 66, loss = 0.57462309
Iteration 67, loss = 0.57431654
Iteration 68, loss = 0.57401584
Iteration 69, loss = 0.57374524
Iteration 70, loss = 0.57351872
Iteration 71, loss = 0.57327322
Iteration 72, loss = 0.57307507
Iteration 73, loss = 0.58411052
Iteration 74, loss = 0.57995320
Iteration 75, loss = 0.57350600
Iteration 76, loss = 0.56992083
Iteration 77, loss = 0.56724558
Iteration 78, loss = 0.56504093
Iteration 79, loss = 0.56316663
Iteration 80, loss = 0.56157260
Iteration 81, loss = 0.56016414
Iteration 82, loss = 0.55892925
Iteration 83, loss = 0.55785704
Iteration 84, loss = 0.55691094
Iteration 85, loss = 0.55610202
Iteration 86, loss = 0.55531923
Iteration 87, loss = 0.55465191
Iteration 88, loss = 0.55407210
Iteration 89, loss = 0.55352280
Iteration 90, loss = 0.55303280
Iteration 91, loss = 0.55260114
Iteration 92, loss = 0.55220236
Iteration 93, loss = 0.55184698
Iteration 94, loss = 0.55152668
Iteration 95, loss = 0.55122103
Iteration 96, loss = 0.55096646
Iteration 97, loss = 0.55072096
Iteration 98, loss = 0.55050747
Iteration 99, loss = 0.55030612
Iteration 100, loss = 0.54873625
Iteration 101, loss = 0.54615517
Iteration 102, loss = 0.54574114
Iteration 103, loss = 0.54550521
Iteration 104, loss = 0.54527914
Iteration 105, loss = 0.54509768
Iteration 106, loss = 0.54492115
Iteration 107, loss = 0.54477286
Iteration 108, loss = 0.54463520
Iteration 109, loss = 0.54452195
Iteration 110, loss = 0.54438509
Iteration 111, loss = 0.54430650
Iteration 112, loss = 0.54419753
Iteration 113, loss = 0.54412602
Iteration 114, loss = 0.54405049
Iteration 115, loss = 0.54399932
Iteration 116, loss = 0.54397367
Iteration 117, loss = 0.54388362
Iteration 118, loss = 0.54381251
Iteration 119, loss = 0.54376328
Iteration 120, loss = 0.54371589
Iteration 121, loss = 0.54369099
Iteration 122, loss = 0.54368090
Iteration 123, loss = 0.54361645
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.65119496
Iteration 2, loss = 0.62906436
Iteration 3, loss = 0.61334153
Iteration 4, loss = 0.59939516
Iteration 5, loss = 0.58420807
Iteration 6, loss = 0.57308634
Iteration 7, loss = 0.56329441
Iteration 8, loss = 0.55508028
Iteration 9, loss = 0.55321369
Iteration 10, loss = 0.55211937
Iteration 11, loss = 0.55391753
Iteration 12, loss = 0.55062630
Iteration 13, loss = 0.51711013
Iteration 14, loss = 0.50251608
Iteration 15, loss = 0.52874718
Iteration 16, loss = 0.52034621
Iteration 17, loss = 0.51406599
Iteration 18, loss = 0.50695659
Iteration 19, loss = 0.49982680
Iteration 20, loss = 0.49890056
Iteration 21, loss = 0.50022891
Iteration 22, loss = 0.49762126
Iteration 23, loss = 0.49503952
Iteration 24, loss = 0.49263665
Iteration 25, loss = 0.49038955
Iteration 26, loss = 0.48827706
Iteration 27, loss = 0.48627409
Iteration 28, loss = 0.48437940
Iteration 29, loss = 0.48258271
Iteration 30, loss = 0.48085815
Iteration 31, loss = 0.47920596
Iteration 32, loss = 0.47763072
Iteration 33, loss = 0.47611085
Iteration 34, loss = 0.47341521
Iteration 35, loss = 0.47166740
Iteration 36, loss = 0.47522114
Iteration 37, loss = 0.47401461
Iteration 38, loss = 0.47290113
Iteration 39, loss = 0.47182879
Iteration 40, loss = 0.47078984
Iteration 41, loss = 0.46980699
Iteration 42, loss = 0.46885204
Iteration 43, loss = 0.46791969
Iteration 44, loss = 0.46703186
Iteration 45, loss = 0.46618070
Iteration 46, loss = 0.46535099
Iteration 47, loss = 0.46455870
Iteration 48, loss = 0.46379703
Iteration 49, loss = 0.46305654
Iteration 50, loss = 0.46236094
Iteration 51, loss = 0.46166679
Iteration 52, loss = 0.46101178
Iteration 53, loss = 0.46037782
Iteration 54, loss = 0.45977599
Iteration 55, loss = 0.45918630
Iteration 56, loss = 0.45862897
Iteration 57, loss = 0.45808380
Iteration 58, loss = 0.45755980
Iteration 59, loss = 0.45706101
Iteration 60, loss = 0.45657540
Iteration 61, loss = 0.45611795
Iteration 62, loss = 0.45567507
Iteration 63, loss = 0.45524848
Iteration 64, loss = 0.45484832
Iteration 65, loss = 0.45444199
Iteration 66, loss = 0.45406591
Iteration 67, loss = 0.45370080
Iteration 68, loss = 0.45335392
Iteration 69, loss = 0.45302131
Iteration 70, loss = 0.45270226
Iteration 71, loss = 0.45241530
Iteration 72, loss = 0.45209631
Iteration 73, loss = 0.45182158
Iteration 74, loss = 0.45154929
Iteration 75, loss = 0.45129994
Iteration 76, loss = 0.45104736
Iteration 77, loss = 0.45081587
Iteration 78, loss = 0.45060495
Iteration 79, loss = 0.45038042
Iteration 80, loss = 0.45018863
Iteration 81, loss = 0.45000102
Iteration 82, loss = 0.44979429
Iteration 83, loss = 0.44962236
Iteration 84, loss = 0.44945874
Iteration 85, loss = 0.44930131
Iteration 86, loss = 0.44914347
Iteration 87, loss = 0.44899323
Iteration 88, loss = 0.44885830
Iteration 89, loss = 0.44872148
Iteration 90, loss = 0.44859407
Iteration 91, loss = 0.44847016
Iteration 92, loss = 0.44836129
Iteration 93, loss = 0.44824923
Iteration 94, loss = 0.44814922
Iteration 95, loss = 0.44804928
Iteration 96, loss = 0.44795772
Iteration 97, loss = 0.44786978
Iteration 98, loss = 0.44778073
Iteration 99, loss = 0.44770951
Iteration 100, loss = 0.44763732
Iteration 101, loss = 0.44756511
Iteration 102, loss = 0.44750330
Iteration 103, loss = 0.44743807
Iteration 104, loss = 0.44738077
Iteration 105, loss = 0.44733879
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.76220277
Iteration 2, loss = 0.73348806
Iteration 3, loss = 0.69978338
Iteration 4, loss = 0.69476461
Iteration 5, loss = 0.69093147
Iteration 6, loss = 0.68761707
Iteration 7, loss = 0.68171404
Iteration 8, loss = 0.67689615
Iteration 9, loss = 0.67210701
Iteration 10, loss = 0.66876496
Iteration 11, loss = 0.66575177
Iteration 12, loss = 0.66300362
Iteration 13, loss = 0.66041966
Iteration 14, loss = 0.65795707
Iteration 15, loss = 0.64780223
Iteration 16, loss = 0.63411966
Iteration 17, loss = 0.62588932
Iteration 18, loss = 0.61499447
Iteration 19, loss = 0.57765334
Iteration 20, loss = 0.56482843
Iteration 21, loss = 0.55489472
Iteration 22, loss = 0.54392662
Iteration 23, loss = 0.54501506
Iteration 24, loss = 0.56905367
Iteration 25, loss = 0.58999611
Iteration 26, loss = 0.58323827
Iteration 27, loss = 0.57920872
Iteration 28, loss = 0.57649701
Iteration 29, loss = 0.57353361
Iteration 30, loss = 0.57013598
Iteration 31, loss = 0.56705644
Iteration 32, loss = 0.56449396
Iteration 33, loss = 0.56142781
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 7.31546373
Iteration 2, loss = 3.65893472
Iteration 3, loss = 3.64967115
Iteration 4, loss = 3.85145359
Iteration 5, loss = 3.64384751
Iteration 6, loss = 3.96642207
Iteration 7, loss = 3.57477142
Iteration 8, loss = 3.81903361
Iteration 9, loss = 3.56204446
Iteration 10, loss = 3.59466280
Iteration 11, loss = 3.19032512
Iteration 12, loss = 2.64787796
Iteration 13, loss = 4.15660903
Iteration 14, loss = 3.67057640
Iteration 15, loss = 3.19399916
Iteration 16, loss = 3.64383818
Iteration 17, loss = 3.46501721
Iteration 18, loss = 2.74843687
Iteration 19, loss = 3.77594260
Iteration 20, loss = 3.65244070
Iteration 21, loss = 3.49652141
Iteration 22, loss = 3.77554514
Iteration 23, loss = 3.05875562
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 8.43564183
Iteration 2, loss = 4.74223723
Iteration 3, loss = 3.80128898
Iteration 4, loss = 4.26174892
Iteration 5, loss = 3.94461772
Iteration 6, loss = 4.59207553
Iteration 7, loss = 4.27423503
Iteration 8, loss = 3.68261357
Iteration 9, loss = 3.07183119
Iteration 10, loss = 4.12832169
Iteration 11, loss = 3.34253347
Iteration 12, loss = 4.20840942
Iteration 13, loss = 3.21425424
Iteration 14, loss = 3.31684328
Iteration 15, loss = 3.05675429
Iteration 16, loss = 2.78228390
Iteration 17, loss = 3.42429793
Iteration 18, loss = 2.29186739
Iteration 19, loss = 4.05252521
Iteration 20, loss = 3.87230762
Iteration 21, loss = 2.73103909
Iteration 22, loss = 3.12232866
Iteration 23, loss = 4.16715640
Iteration 24, loss = 2.79962176
Iteration 25, loss = 3.75667609
Iteration 26, loss = 3.59107468
Iteration 27, loss = 2.78045217
Iteration 28, loss = 2.56248727
Iteration 29, loss = 2.57467415
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 5.88541606
Iteration 2, loss = 4.52194661
Iteration 3, loss = 4.49202041
Iteration 4, loss = 3.88489619
Iteration 5, loss = 3.91822257
Iteration 6, loss = 3.82129599
Iteration 7, loss = 6.33678458
Iteration 8, loss = 3.52329563
Iteration 9, loss = 2.92046879
Iteration 10, loss = 4.02952296
Iteration 11, loss = 3.71452953
Iteration 12, loss = 2.78554281
Iteration 13, loss = 3.76685478
Iteration 14, loss = 3.15507778
Iteration 15, loss = 3.53384311
Iteration 16, loss = 3.23104031
Iteration 17, loss = 3.63739505
Iteration 18, loss = 3.54982736
Iteration 19, loss = 2.67855227
Iteration 20, loss = 3.88769845
Iteration 21, loss = 2.78166985
Iteration 22, loss = 3.78332900
[Verbose training output condensed: 27 consecutive MLPClassifier fits, each logged iteration by iteration ("Iteration N, loss = ..."), and each terminating with "Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping." Final losses per fit ranged from roughly 0.11 to 6.4. One further fit's log (iterations 1–36, loss falling from 13.94 to 1.09) is cut off at the end of this excerpt.]
Iteration 37, loss = 1.05337640
Iteration 38, loss = 1.01610580
Iteration 39, loss = 0.94341787
Iteration 40, loss = 0.89962153
Iteration 41, loss = 0.86251746
Iteration 42, loss = 0.81301939
Iteration 43, loss = 0.75323067
Iteration 44, loss = 0.73806324
Iteration 45, loss = 0.73804959
Iteration 46, loss = 0.73468889
Iteration 47, loss = 0.73467790
Iteration 48, loss = 0.73467244
Iteration 49, loss = 0.73467084
Iteration 50, loss = 0.73467361
Iteration 51, loss = 0.73466668
Iteration 52, loss = 0.73466912
Iteration 53, loss = 0.73466422
Iteration 54, loss = 0.73465989
Iteration 55, loss = 0.73473295
Iteration 56, loss = 0.73472833
Iteration 57, loss = 0.73473194
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.39843841
Iteration 2, loss = 0.24113695
Iteration 3, loss = 0.26455530
Iteration 4, loss = 0.22321544
Iteration 5, loss = 0.20552557
Iteration 6, loss = 0.18300558
Iteration 7, loss = 0.15090861
Iteration 8, loss = 0.15758380
Iteration 9, loss = 0.15483568
Iteration 10, loss = 0.14799706
Iteration 11, loss = 0.14540337
Iteration 12, loss = 0.14436171
Iteration 13, loss = 0.14209690
Iteration 14, loss = 0.14572947
Iteration 15, loss = 0.14530367
Iteration 16, loss = 0.14563683
Iteration 17, loss = 0.17291588
Iteration 18, loss = 0.17157562
Iteration 19, loss = 0.16723363
Iteration 20, loss = 0.16650416
Iteration 21, loss = 0.16305905
Iteration 22, loss = 0.16404910
Iteration 23, loss = 0.16136078
Iteration 24, loss = 0.16193538
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.55326216
Iteration 2, loss = 0.27355019
Iteration 3, loss = 0.20170136
Iteration 4, loss = 0.17460623
Iteration 5, loss = 0.16341252
Iteration 6, loss = 0.14889128
Iteration 7, loss = 0.15291138
Iteration 8, loss = 0.15238138
Iteration 9, loss = 0.15946267
Iteration 10, loss = 0.13870290
Iteration 11, loss = 0.13165180
Iteration 12, loss = 0.12826418
Iteration 13, loss = 0.12580509
Iteration 14, loss = 0.12521936
Iteration 15, loss = 0.18001281
Iteration 16, loss = 0.19040455
Iteration 17, loss = 0.16820440
Iteration 18, loss = 0.16793506
Iteration 19, loss = 0.19962491
Iteration 20, loss = 0.19168650
Iteration 21, loss = 0.16787113
Iteration 22, loss = 0.17761794
Iteration 23, loss = 0.15771120
Iteration 24, loss = 0.16592801
Iteration 25, loss = 0.17957858
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.43026297
Iteration 2, loss = 0.23929893
Iteration 3, loss = 0.21300594
Iteration 4, loss = 0.18609741
Iteration 5, loss = 0.18974078
Iteration 6, loss = 0.17033413
Iteration 7, loss = 0.17608572
Iteration 8, loss = 0.18119039
Iteration 9, loss = 0.17711522
Iteration 10, loss = 0.18182713
Iteration 11, loss = 0.16996595
Iteration 12, loss = 0.16642799
Iteration 13, loss = 0.15071036
Iteration 14, loss = 0.16281889
Iteration 15, loss = 0.16483538
Iteration 16, loss = 0.16370227
Iteration 17, loss = 0.16807617
Iteration 18, loss = 0.14575365
Iteration 19, loss = 0.17229934
Iteration 20, loss = 0.16468063
Iteration 21, loss = 0.16456477
Iteration 22, loss = 0.16934509
Iteration 23, loss = 0.16744488
Iteration 24, loss = 0.18945439
Iteration 25, loss = 0.18562359
Iteration 26, loss = 0.18061015
Iteration 27, loss = 0.18131293
Iteration 28, loss = 0.17749843
Iteration 29, loss = 0.17479031
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.42631720
Iteration 2, loss = 0.23484731
Iteration 3, loss = 0.19898942
Iteration 4, loss = 0.18366306
Iteration 5, loss = 0.16201763
Iteration 6, loss = 0.16012253
Iteration 7, loss = 0.13824067
Iteration 8, loss = 0.13666659
Iteration 9, loss = 0.13809491
Iteration 10, loss = 0.15154236
Iteration 11, loss = 0.14117933
Iteration 12, loss = 0.14633828
Iteration 13, loss = 0.17565968
Iteration 14, loss = 0.16968642
Iteration 15, loss = 0.16807323
Iteration 16, loss = 0.17093536
Iteration 17, loss = 0.16690470
Iteration 18, loss = 0.17034017
Iteration 19, loss = 0.17038735
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.33075595
Iteration 2, loss = 0.20100418
Iteration 3, loss = 0.19068475
Iteration 4, loss = 0.15017046
Iteration 5, loss = 0.13985514
Iteration 6, loss = 0.12003101
Iteration 7, loss = 0.11664490
Iteration 8, loss = 0.11936175
Iteration 9, loss = 0.11383727
Iteration 10, loss = 0.10661691
Iteration 11, loss = 0.11626686
Iteration 12, loss = 0.12863788
Iteration 13, loss = 0.11892844
Iteration 14, loss = 0.12193101
Iteration 15, loss = 0.17950658
Iteration 16, loss = 0.19244448
Iteration 17, loss = 0.17661762
Iteration 18, loss = 0.17364196
Iteration 19, loss = 0.17506589
Iteration 20, loss = 0.18247957
Iteration 21, loss = 0.17469514
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.64689014
Iteration 2, loss = 0.51459386
Iteration 3, loss = 0.38865066
Iteration 4, loss = 0.28862065
Iteration 5, loss = 0.26272753
Iteration 6, loss = 0.28524748
Iteration 7, loss = 0.26393391
Iteration 8, loss = 0.25123138
Iteration 9, loss = 0.20350878
Iteration 10, loss = 0.19475529
Iteration 11, loss = 0.20893458
Iteration 12, loss = 0.22769538
Iteration 13, loss = 0.22473730
Iteration 14, loss = 0.23529967
Iteration 15, loss = 0.24803231
Iteration 16, loss = 0.27725947
Iteration 17, loss = 0.24495061
Iteration 18, loss = 0.23222066
Iteration 19, loss = 0.21797189
Iteration 20, loss = 0.22973720
Iteration 21, loss = 0.22580786
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.60638340
Iteration 2, loss = 0.41356535
Iteration 3, loss = 0.26874724
Iteration 4, loss = 0.22512278
Iteration 5, loss = 0.20342287
Iteration 6, loss = 0.20319316
Iteration 7, loss = 0.18646597
Iteration 8, loss = 0.18005453
Iteration 9, loss = 0.18512910
Iteration 10, loss = 0.20066359
Iteration 11, loss = 0.20650650
Iteration 12, loss = 0.22908093
Iteration 13, loss = 0.20382329
Iteration 14, loss = 0.19122835
Iteration 15, loss = 0.17879761
Iteration 16, loss = 0.17926422
Iteration 17, loss = 0.18039829
Iteration 18, loss = 0.19942897
Iteration 19, loss = 0.18440677
Iteration 20, loss = 0.17797519
Iteration 21, loss = 0.15994769
Iteration 22, loss = 0.15427201
Iteration 23, loss = 0.17633108
Iteration 24, loss = 0.20387508
Iteration 25, loss = 0.19879876
Iteration 26, loss = 0.21288478
Iteration 27, loss = 0.23886117
Iteration 28, loss = 0.21918342
Iteration 29, loss = 0.21164424
Iteration 30, loss = 0.22097013
Iteration 31, loss = 0.25327874
Iteration 32, loss = 0.23280147
Iteration 33, loss = 0.22749719
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.63122017
Iteration 2, loss = 0.46588153
Iteration 3, loss = 0.32543642
Iteration 4, loss = 0.26581909
Iteration 5, loss = 0.23961701
Iteration 6, loss = 0.22242753
Iteration 7, loss = 0.20118743
Iteration 8, loss = 0.20202190
Iteration 9, loss = 0.18616839
Iteration 10, loss = 0.18730168
Iteration 11, loss = 0.16722629
Iteration 12, loss = 0.18872290
Iteration 13, loss = 0.18854384
Iteration 14, loss = 0.20007538
Iteration 15, loss = 0.20876413
Iteration 16, loss = 0.19777879
Iteration 17, loss = 0.21249610
Iteration 18, loss = 0.20809462
Iteration 19, loss = 0.19764817
Iteration 20, loss = 0.23597917
Iteration 21, loss = 0.22240190
Iteration 22, loss = 0.26433770
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.65788452
Iteration 2, loss = 0.53200876
Iteration 3, loss = 0.39616831
Iteration 4, loss = 0.29967804
Iteration 5, loss = 0.22542740
Iteration 6, loss = 0.20004988
Iteration 7, loss = 0.19356786
Iteration 8, loss = 0.19220950
Iteration 9, loss = 0.17850163
Iteration 10, loss = 0.16745269
Iteration 11, loss = 0.16454447
Iteration 12, loss = 0.15959790
Iteration 13, loss = 0.15487759
Iteration 14, loss = 0.14488181
Iteration 15, loss = 0.14913759
Iteration 16, loss = 0.14861174
Iteration 17, loss = 0.14478415
Iteration 18, loss = 0.15427898
Iteration 19, loss = 0.15093135
Iteration 20, loss = 0.15066365
Iteration 21, loss = 0.17712651
Iteration 22, loss = 0.16053019
Iteration 23, loss = 0.14849289
Iteration 24, loss = 0.15524640
Iteration 25, loss = 0.17218633
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.62568451
Iteration 2, loss = 0.48330821
Iteration 3, loss = 0.35868864
Iteration 4, loss = 0.27613518
Iteration 5, loss = 0.24030276
Iteration 6, loss = 0.21844098
Iteration 7, loss = 0.21382126
Iteration 8, loss = 0.21239222
Iteration 9, loss = 0.22101817
Iteration 10, loss = 0.20882927
Iteration 11, loss = 0.21524568
Iteration 12, loss = 0.21481983
Iteration 13, loss = 0.22071685
Iteration 14, loss = 0.22064289
Iteration 15, loss = 0.24741630
Iteration 16, loss = 0.24949082
Iteration 17, loss = 0.24657488
Iteration 18, loss = 0.24415523
Iteration 19, loss = 0.24339686
Iteration 20, loss = 0.23651644
Iteration 21, loss = 0.23236742
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 5.30789736
Iteration 2, loss = 3.72431078
Iteration 3, loss = 4.04696394
Iteration 4, loss = 3.47968820
Iteration 5, loss = 3.55710693
Iteration 6, loss = 3.07297808
Iteration 7, loss = 2.99632163
Iteration 8, loss = 2.70673540
Iteration 9, loss = 3.30851912
Iteration 10, loss = 2.50715754
Iteration 11, loss = 2.71383271
Iteration 12, loss = 2.54762620
Iteration 13, loss = 2.90250028
Iteration 14, loss = 2.31133372
Iteration 15, loss = 2.43006460
Iteration 16, loss = 2.30587045
Iteration 17, loss = 2.51424138
Iteration 18, loss = 3.37698312
Iteration 19, loss = 2.14843902
Iteration 20, loss = 2.66482117
Iteration 21, loss = 2.21574262
Iteration 22, loss = 2.77483042
Iteration 23, loss = 2.07644895
Iteration 24, loss = 2.18624522
Iteration 25, loss = 2.53048106
Iteration 26, loss = 2.05572477
Iteration 27, loss = 2.24868763
Iteration 28, loss = 2.53917690
Iteration 29, loss = 1.86930850
Iteration 30, loss = 2.93316195
Iteration 31, loss = 2.87435614
Iteration 32, loss = 2.40844503
Iteration 33, loss = 2.37540079
Iteration 34, loss = 2.72618788
Iteration 35, loss = 2.70844173
Iteration 36, loss = 1.97984618
Iteration 37, loss = 3.27934160
Iteration 38, loss = 2.04511740
Iteration 39, loss = 2.65910207
Iteration 40, loss = 2.24094833
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 13.59943440
Iteration 2, loss = 8.56010042
Iteration 3, loss = 6.56597356
Iteration 4, loss = 5.89971299
Iteration 5, loss = 5.90311813
Iteration 6, loss = 6.15044270
Iteration 7, loss = 4.72491493
Iteration 8, loss = 4.00054147
Iteration 9, loss = 3.94965085
Iteration 10, loss = 4.07390836
Iteration 11, loss = 3.34688440
Iteration 12, loss = 4.11257529
Iteration 13, loss = 3.70399775
Iteration 14, loss = 3.46861735
Iteration 15, loss = 3.52008127
Iteration 16, loss = 3.25250783
Iteration 17, loss = 3.66468384
Iteration 18, loss = 3.06708556
Iteration 19, loss = 3.11181146
Iteration 20, loss = 3.64572830
Iteration 21, loss = 3.10343348
Iteration 22, loss = 3.40709112
Iteration 23, loss = 3.46586047
Iteration 24, loss = 3.38274845
Iteration 25, loss = 3.33639310
Iteration 26, loss = 2.96529044
Iteration 27, loss = 3.00199713
Iteration 28, loss = 3.14084987
Iteration 29, loss = 3.10605764
Iteration 30, loss = 3.14324569
Iteration 31, loss = 3.22081787
Iteration 32, loss = 3.97535245
Iteration 33, loss = 2.99669861
Iteration 34, loss = 2.82818730
Iteration 35, loss = 2.89008199
Iteration 36, loss = 3.05823611
Iteration 37, loss = 2.36858096
Iteration 38, loss = 4.63782431
Iteration 39, loss = 2.48314815
Iteration 40, loss = 2.79156266
Iteration 41, loss = 3.13879600
Iteration 42, loss = 2.20733958
Iteration 43, loss = 2.62858421
Iteration 44, loss = 2.71708785
Iteration 45, loss = 2.61463567
Iteration 46, loss = 2.45811109
Iteration 47, loss = 2.22011442
Iteration 48, loss = 2.58852225
Iteration 49, loss = 2.72788367
Iteration 50, loss = 2.65891470
Iteration 51, loss = 2.47997687
Iteration 52, loss = 4.60801778
Iteration 53, loss = 2.57194230
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 8.61594985
Iteration 2, loss = 5.47656123
Iteration 3, loss = 4.23232069
Iteration 4, loss = 3.66077422
Iteration 5, loss = 3.58294343
Iteration 6, loss = 3.79617079
Iteration 7, loss = 3.48555894
Iteration 8, loss = 2.97278455
Iteration 9, loss = 3.20937686
Iteration 10, loss = 3.59067488
Iteration 11, loss = 3.84328949
Iteration 12, loss = 3.05145634
Iteration 13, loss = 3.72272613
Iteration 14, loss = 3.20426798
Iteration 15, loss = 2.49354504
Iteration 16, loss = 2.84700005
Iteration 17, loss = 2.69993503
Iteration 18, loss = 3.12903421
Iteration 19, loss = 2.84352058
Iteration 20, loss = 2.70672423
Iteration 21, loss = 2.96590139
Iteration 22, loss = 3.08940527
Iteration 23, loss = 2.76295336
Iteration 24, loss = 2.41233022
Iteration 25, loss = 2.61211039
Iteration 26, loss = 2.76098666
Iteration 27, loss = 2.90649716
Iteration 28, loss = 3.40625830
Iteration 29, loss = 3.14128209
Iteration 30, loss = 2.63174114
Iteration 31, loss = 2.97125952
Iteration 32, loss = 3.74708121
Iteration 33, loss = 2.78391168
Iteration 34, loss = 3.81233227
Iteration 35, loss = 3.85797075
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 14.21697516
Iteration 2, loss = 8.16818835
Iteration 3, loss = 5.76021080
Iteration 4, loss = 5.70384377
Iteration 5, loss = 4.68921739
Iteration 6, loss = 3.38178082
Iteration 7, loss = 2.91785376
Iteration 8, loss = 2.69863567
Iteration 9, loss = 2.85628366
Iteration 10, loss = 3.12754630
Iteration 11, loss = 2.67303327
Iteration 12, loss = 2.68856361
Iteration 13, loss = 2.92218756
Iteration 14, loss = 2.91586651
Iteration 15, loss = 3.22033776
Iteration 16, loss = 2.73842617
Iteration 17, loss = 3.02068512
Iteration 18, loss = 3.04286716
Iteration 19, loss = 2.73174091
Iteration 20, loss = 2.29699600
Iteration 21, loss = 2.81909671
Iteration 22, loss = 2.99959199
Iteration 23, loss = 2.47908344
Iteration 24, loss = 2.45638867
Iteration 25, loss = 2.60906159
Iteration 26, loss = 2.11591087
Iteration 27, loss = 2.87968023
Iteration 28, loss = 2.36731076
Iteration 29, loss = 2.21403232
Iteration 30, loss = 2.53560658
Iteration 31, loss = 2.38818376
Iteration 32, loss = 2.46210365
Iteration 33, loss = 2.18335788
Iteration 34, loss = 3.05480567
Iteration 35, loss = 2.64373675
Iteration 36, loss = 2.65662959
Iteration 37, loss = 2.78181039
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 11.06731977
Iteration 2, loss = 7.92168873
Iteration 3, loss = 7.04076405
Iteration 4, loss = 5.92955714
Iteration 5, loss = 6.16731021
Iteration 6, loss = 5.48672940
Iteration 7, loss = 3.75800746
Iteration 8, loss = 3.64083015
Iteration 9, loss = 3.54690250
Iteration 10, loss = 3.76702708
Iteration 11, loss = 4.43516135
Iteration 12, loss = 3.24974757
Iteration 13, loss = 3.69856012
Iteration 14, loss = 3.38268782
Iteration 15, loss = 3.96443094
Iteration 16, loss = 3.83437262
Iteration 17, loss = 3.78480765
Iteration 18, loss = 3.38224347
Iteration 19, loss = 3.80737784
Iteration 20, loss = 3.50997680
Iteration 21, loss = 3.82515297
Iteration 22, loss = 2.57187846
Iteration 23, loss = 3.77829467
Iteration 24, loss = 3.43672686
Iteration 25, loss = 5.30643934
Iteration 26, loss = 3.37738133
Iteration 27, loss = 2.74690765
Iteration 28, loss = 2.59662338
Iteration 29, loss = 3.74441843
Iteration 30, loss = 3.06237722
Iteration 31, loss = 3.30427330
Iteration 32, loss = 2.14092350
Iteration 33, loss = 2.29600780
Iteration 34, loss = 3.43131979
Iteration 35, loss = 2.26037524
Iteration 36, loss = 3.12492373
Iteration 37, loss = 2.90370284
Iteration 38, loss = 2.97018113
Iteration 39, loss = 2.05713602
Iteration 40, loss = 3.29016677
Iteration 41, loss = 3.25718108
Iteration 42, loss = 3.21265598
Iteration 43, loss = 2.66058998
Iteration 44, loss = 2.59759152
Iteration 45, loss = 3.14158272
Iteration 46, loss = 2.19623726
Iteration 47, loss = 3.73100328
Iteration 48, loss = 2.96461876
Iteration 49, loss = 2.47269951
Iteration 50, loss = 2.71973681
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.49256971
Iteration 2, loss = 0.26569567
Iteration 3, loss = 0.22289968
Iteration 4, loss = 0.19643100
Iteration 5, loss = 0.17272679
Iteration 6, loss = 0.16492681
Iteration 7, loss = 0.16754652
Iteration 8, loss = 0.18216279
Iteration 9, loss = 0.18143251
Iteration 10, loss = 0.18228554
Iteration 11, loss = 0.20954865
Iteration 12, loss = 0.21423675
Iteration 13, loss = 0.20999726
Iteration 14, loss = 0.20776009
Iteration 15, loss = 0.20697556
Iteration 16, loss = 0.20510949
Iteration 17, loss = 0.21036651
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.51903398
Iteration 2, loss = 0.27911568
Iteration 3, loss = 0.20803544
Iteration 4, loss = 0.20865710
Iteration 5, loss = 0.20117417
Iteration 6, loss = 0.22794576
Iteration 7, loss = 0.21765589
Iteration 8, loss = 0.20529201
Iteration 9, loss = 0.21606813
Iteration 10, loss = 0.21407535
Iteration 11, loss = 0.20371779
Iteration 12, loss = 0.20793266
Iteration 13, loss = 0.22629880
Iteration 14, loss = 0.20746640
Iteration 15, loss = 0.22020416
Iteration 16, loss = 0.21276003
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.55595033
Iteration 2, loss = 0.30765769
Iteration 3, loss = 0.20596080
Iteration 4, loss = 0.18450702
Iteration 5, loss = 0.16565679
Iteration 6, loss = 0.17758903
Iteration 7, loss = 0.21813446
Iteration 8, loss = 0.22653844
Iteration 9, loss = 0.21395588
Iteration 10, loss = 0.20028024
Iteration 11, loss = 0.18946095
Iteration 12, loss = 0.19555631
Iteration 13, loss = 0.18608764
Iteration 14, loss = 0.18238264
Iteration 15, loss = 0.18664124
Iteration 16, loss = 0.20654006
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.53267424
Iteration 2, loss = 0.28936223
Iteration 3, loss = 0.21233971
Iteration 4, loss = 0.21164884
Iteration 5, loss = 0.18217590
Iteration 6, loss = 0.19287798
Iteration 7, loss = 0.19347742
Iteration 8, loss = 0.18708700
Iteration 9, loss = 0.18688121
Iteration 10, loss = 0.18960416
Iteration 11, loss = 0.18947555
Iteration 12, loss = 0.19203594
Iteration 13, loss = 0.21917792
Iteration 14, loss = 0.22804239
Iteration 15, loss = 0.21732113
Iteration 16, loss = 0.20995846
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.52699429
Iteration 2, loss = 0.28448399
Iteration 3, loss = 0.20899249
Iteration 4, loss = 0.22008515
Iteration 5, loss = 0.20014247
Iteration 6, loss = 0.17672706
Iteration 7, loss = 0.18701032
Iteration 8, loss = 0.17253318
Iteration 9, loss = 0.17804134
Iteration 10, loss = 0.19707925
Iteration 11, loss = 0.18758783
Iteration 12, loss = 0.16974998
Iteration 13, loss = 0.20936872
Iteration 14, loss = 0.22260692
Iteration 15, loss = 0.21419950
Iteration 16, loss = 0.20824675
Iteration 17, loss = 0.20680226
Iteration 18, loss = 0.20831128
Iteration 19, loss = 0.20687050
Iteration 20, loss = 0.19083517
Iteration 21, loss = 0.20598229
Iteration 22, loss = 0.19141623
Iteration 23, loss = 0.16400122
Iteration 24, loss = 0.22111567
Iteration 25, loss = 0.21642937
Iteration 26, loss = 0.19487893
Iteration 27, loss = 0.21683261
Iteration 28, loss = 0.21280561
Iteration 29, loss = 0.21190762
Iteration 30, loss = 0.21301951
Iteration 31, loss = 0.21125947
Iteration 32, loss = 0.20782534
Iteration 33, loss = 0.20026210
Iteration 34, loss = 0.19999710
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 3.41880729
Iteration 2, loss = 3.06265307
Iteration 3, loss = 3.07243950
Iteration 4, loss = 2.73023875
Iteration 5, loss = 2.65454264
Iteration 6, loss = 2.25721660
Iteration 7, loss = 2.02995031
Iteration 8, loss = 2.04620371
Iteration 9, loss = 1.92344835
Iteration 10, loss = 2.02376328
Iteration 11, loss = 1.97310187
Iteration 12, loss = 1.88262302
Iteration 13, loss = 1.80277226
Iteration 14, loss = 1.85411949
Iteration 15, loss = 2.02895215
Iteration 16, loss = 1.78542529
Iteration 17, loss = 1.75661467
Iteration 18, loss = 1.94027065
Iteration 19, loss = 1.88327651
Iteration 20, loss = 1.93092442
Iteration 21, loss = 2.10127044
Iteration 22, loss = 1.92217608
Iteration 23, loss = 1.99061248
Iteration 24, loss = 2.03074318
Iteration 25, loss = 2.01795055
Iteration 26, loss = 1.76494644
Iteration 27, loss = 1.80765908
Iteration 28, loss = 1.72572060
Iteration 29, loss = 1.83149008
Iteration 30, loss = 1.95682183
Iteration 31, loss = 2.15894832
Iteration 32, loss = 1.78991017
Iteration 33, loss = 2.00830157
Iteration 34, loss = 1.69854860
Iteration 35, loss = 1.75790477
Iteration 36, loss = 2.05199392
Iteration 37, loss = 1.75517853
Iteration 38, loss = 1.87317223
Iteration 39, loss = 1.70656961
Iteration 40, loss = 1.62939135
Iteration 41, loss = 1.63377311
Iteration 42, loss = 1.73171643
Iteration 43, loss = 1.91241044
Iteration 44, loss = 1.81118418
Iteration 45, loss = 1.86045209
Iteration 46, loss = 1.73996414
Iteration 47, loss = 2.00770656
Iteration 48, loss = 1.77065001
Iteration 49, loss = 1.74858054
Iteration 50, loss = 1.70313467
Iteration 51, loss = 1.79084731
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 2.90362389
Iteration 2, loss = 2.55893084
Iteration 3, loss = 2.34754324
Iteration 4, loss = 2.06671609
Iteration 5, loss = 1.79936686
Iteration 6, loss = 1.59042755
Iteration 7, loss = 1.31628336
Iteration 8, loss = 1.06152746
Iteration 9, loss = 0.85831661
Iteration 10, loss = 0.72484089
Iteration 11, loss = 0.72147876
Iteration 12, loss = 0.71920194
Iteration 13, loss = 0.71763142
Iteration 14, loss = 0.71668150
Iteration 15, loss = 0.71605293
Iteration 16, loss = 0.71558306
Iteration 17, loss = 0.71526997
Iteration 18, loss = 0.71514607
Iteration 19, loss = 0.71500745
Iteration 20, loss = 0.71499816
Iteration 21, loss = 0.71496866
Iteration 22, loss = 0.71491844
Iteration 23, loss = 0.71489062
Iteration 24, loss = 0.71501328
Iteration 25, loss = 0.71500724
Iteration 26, loss = 0.71500012
Iteration 27, loss = 0.71507303
Iteration 28, loss = 0.71520672
Iteration 29, loss = 0.71520427
Iteration 30, loss = 0.71523187
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 7.84318490
Iteration 2, loss = 7.18105091
Iteration 3, loss = 6.17110646
Iteration 4, loss = 4.97215413
Iteration 5, loss = 3.94840876
Iteration 6, loss = 2.88295962
Iteration 7, loss = 1.72007298
Iteration 8, loss = 0.92397518
Iteration 9, loss = 0.88665972
Iteration 10, loss = 0.88423093
Iteration 11, loss = 0.86971509
Iteration 12, loss = 0.86227156
Iteration 13, loss = 0.85172898
Iteration 14, loss = 0.84653789
Iteration 15, loss = 0.79569786
Iteration 16, loss = 0.78568574
Iteration 17, loss = 0.77846415
Iteration 18, loss = 0.80527349
Iteration 19, loss = 0.82818617
Iteration 20, loss = 0.82393617
Iteration 21, loss = 0.82337895
Iteration 22, loss = 0.81417548
Iteration 23, loss = 0.80571482
Iteration 24, loss = 0.80410758
Iteration 25, loss = 0.80408118
Iteration 26, loss = 0.80407201
Iteration 27, loss = 0.80405874
Iteration 28, loss = 0.80358915
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 4.11889879
Iteration 2, loss = 3.33956578
Iteration 3, loss = 3.03117793
Iteration 4, loss = 2.73429900
Iteration 5, loss = 2.46762878
Iteration 6, loss = 2.19081290
Iteration 7, loss = 1.87761991
Iteration 8, loss = 1.59071737
Iteration 9, loss = 1.24879963
Iteration 10, loss = 0.98111523
Iteration 11, loss = 0.85904534
Iteration 12, loss = 0.79462059
Iteration 13, loss = 0.79004386
Iteration 14, loss = 0.78406870
Iteration 15, loss = 0.77841107
Iteration 16, loss = 0.77483609
Iteration 17, loss = 0.77153713
Iteration 18, loss = 0.76853652
Iteration 19, loss = 0.76591015
Iteration 20, loss = 0.76355992
Iteration 21, loss = 0.76146859
Iteration 22, loss = 0.75982079
Iteration 23, loss = 0.75153519
Iteration 24, loss = 0.75008431
Iteration 25, loss = 0.74887658
Iteration 26, loss = 0.74774031
Iteration 27, loss = 0.74686123
Iteration 28, loss = 0.74600421
Iteration 29, loss = 0.74538890
Iteration 30, loss = 0.74478054
Iteration 31, loss = 0.74096408
Iteration 32, loss = 0.74053145
Iteration 33, loss = 0.74022985
Iteration 34, loss = 0.74005839
Iteration 35, loss = 0.73986576
Iteration 36, loss = 0.73973506
Iteration 37, loss = 0.73966324
Iteration 38, loss = 0.73964246
Iteration 39, loss = 0.73960623
Iteration 40, loss = 0.73966188
Iteration 41, loss = 0.73972508
Iteration 42, loss = 0.73974695
Iteration 43, loss = 0.73984358
Iteration 44, loss = 0.73981419
Iteration 45, loss = 0.73979469
Iteration 46, loss = 0.73978957
Iteration 47, loss = 0.73984184
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 15.54887225
Iteration 2, loss = 12.83077940
Iteration 3, loss = 9.47736764
Iteration 4, loss = 9.44932196
Iteration 5, loss = 9.43955690
Iteration 6, loss = 9.29697471
Iteration 7, loss = 8.73470883
Iteration 8, loss = 6.08009419
Iteration 9, loss = 2.39385653
Iteration 10, loss = 2.04809230
Iteration 11, loss = 2.43569756
Iteration 12, loss = 2.06134026
Iteration 13, loss = 1.99391433
Iteration 14, loss = 2.29850529
Iteration 15, loss = 2.65653485
Iteration 16, loss = 3.62094528
Iteration 17, loss = 2.35169931
Iteration 18, loss = 2.28491568
Iteration 19, loss = 2.36972053
Iteration 20, loss = 2.00310119
Iteration 21, loss = 2.21085882
Iteration 22, loss = 2.36819387
Iteration 23, loss = 2.21348910
Iteration 24, loss = 1.64547236
Iteration 25, loss = 2.02058019
Iteration 26, loss = 2.08082672
Iteration 27, loss = 1.61145436
Iteration 28, loss = 1.58891142
Iteration 29, loss = 1.55994264
Iteration 30, loss = 1.45886881
Iteration 31, loss = 1.46165418
Iteration 32, loss = 1.42813116
Iteration 33, loss = 1.46732883
Iteration 34, loss = 1.36045123
Iteration 35, loss = 1.45586296
Iteration 36, loss = 1.60558714
Iteration 37, loss = 1.57056347
Iteration 38, loss = 1.41551769
Iteration 39, loss = 1.24559713
Iteration 40, loss = 1.38200321
Iteration 41, loss = 1.44688506
Iteration 42, loss = 1.33719023
Iteration 43, loss = 1.43853703
Iteration 44, loss = 1.43394440
Iteration 45, loss = 1.24791849
Iteration 46, loss = 1.44039395
Iteration 47, loss = 1.43978260
Iteration 48, loss = 1.33182710
Iteration 49, loss = 1.41022720
Iteration 50, loss = 1.36428810
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.65987931
Iteration 2, loss = 0.55291148
Iteration 3, loss = 0.47903756
Iteration 4, loss = 0.40057626
Iteration 5, loss = 0.38680670
Iteration 6, loss = 0.38107955
Iteration 7, loss = 0.31856340
Iteration 8, loss = 0.34767177
Iteration 9, loss = 0.33946246
Iteration 10, loss = 0.31199562
Iteration 11, loss = 0.28766668
Iteration 12, loss = 0.29116252
Iteration 13, loss = 0.30340458
Iteration 14, loss = 0.30785994
Iteration 15, loss = 0.29783032
Iteration 16, loss = 0.29182693
Iteration 17, loss = 0.32621427
Iteration 18, loss = 0.31220326
Iteration 19, loss = 0.30323684
Iteration 20, loss = 0.28477721
Iteration 21, loss = 0.27988337
Iteration 22, loss = 0.27704681
Iteration 23, loss = 0.27964256
Iteration 24, loss = 0.30176191
Iteration 25, loss = 0.28078339
Iteration 26, loss = 0.27062568
Iteration 27, loss = 0.30064386
Iteration 28, loss = 0.31334829
Iteration 29, loss = 0.34458485
Iteration 30, loss = 0.33729957
Iteration 31, loss = 0.32187475
Iteration 32, loss = 0.30490132
Iteration 33, loss = 0.30195075
Iteration 34, loss = 0.27784140
Iteration 35, loss = 0.26439075
Iteration 36, loss = 0.25903874
Iteration 37, loss = 0.25830891
Iteration 38, loss = 0.25577431
Iteration 39, loss = 0.24688204
Iteration 40, loss = 0.24560472
Iteration 41, loss = 0.27520059
Iteration 42, loss = 0.26487121
Iteration 43, loss = 0.26314236
Iteration 44, loss = 0.25370902
Iteration 45, loss = 0.25457862
Iteration 46, loss = 0.23925556
Iteration 47, loss = 0.22426312
Iteration 48, loss = 0.21869627
Iteration 49, loss = 0.21461966
Iteration 50, loss = 0.21619049
Iteration 51, loss = 0.21615310
Iteration 52, loss = 0.21781808
Iteration 53, loss = 0.21855862
Iteration 54, loss = 0.21908966
Iteration 55, loss = 0.22037962
Iteration 56, loss = 0.21817953
Iteration 57, loss = 0.25148540
Iteration 58, loss = 0.23568895
Iteration 59, loss = 0.22871422
Iteration 60, loss = 0.22874895
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.61274190
    ... (iterations 2-30 elided) ...
Iteration 31, loss = 0.25793526
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.60397983
    ... (iterations 2-29 elided) ...
Iteration 30, loss = 0.25620262
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.63657960
    ... (iterations 2-39 elided) ...
Iteration 40, loss = 0.21105324
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.76700201
    ... (iterations 2-27 elided) ...
Iteration 28, loss = 0.25241481
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 1.20251446
    ... (iterations 2-124 elided) ...
Iteration 125, loss = 0.29872088
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 1.49833306
    ... (iterations 2-137 elided) ...
Iteration 138, loss = 0.53138564
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 1.20091232
    ... (iterations 2-144 elided) ...
Iteration 145, loss = 0.21975493
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 1.38406182
    ... (iterations 2-246 elided) ...
Iteration 247, loss = 0.27309277
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 1.35888909
    ... (iterations 2-58 elided) ...
Iteration 59, loss = 0.65708521
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.82351051
    ... (iterations 2-29 elided) ...
Iteration 30, loss = 0.22096110
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.72053364
    ... (iterations 2-54 elided) ...
Iteration 55, loss = 0.13129142
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.53801529
    ... (iterations 2-64 elided) ...
Iteration 65, loss = 0.15783543
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.62451615
Iteration 2, loss = 0.43303442
Iteration 3, loss = 0.36001983
Iteration 4, loss = 0.30024084
Iteration 5, loss = 0.27402779
Iteration 6, loss = 0.23782871
Iteration 7, loss = 0.23551407
Iteration 8, loss = 0.21917728
Iteration 9, loss = 0.21834339
Iteration 10, loss = 0.20822474
Iteration 11, loss = 0.20854156
Iteration 12, loss = 0.23133703
Iteration 13, loss = 0.20637511
Iteration 14, loss = 0.20759941
Iteration 15, loss = 0.20389823
Iteration 16, loss = 0.19386267
Iteration 17, loss = 0.18605725
Iteration 18, loss = 0.17704447
Iteration 19, loss = 0.17896014
Iteration 20, loss = 0.18729360
Iteration 21, loss = 0.20151032
Iteration 22, loss = 0.19577651
Iteration 23, loss = 0.20090219
Iteration 24, loss = 0.19482063
Iteration 25, loss = 0.18894956
Iteration 26, loss = 0.18370061
Iteration 27, loss = 0.18576325
Iteration 28, loss = 0.19133103
Iteration 29, loss = 0.18928004
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.59586771
Iteration 2, loss = 0.41255128
Iteration 3, loss = 0.35735138
Iteration 4, loss = 0.29969861
Iteration 5, loss = 0.28751635
Iteration 6, loss = 0.29282499
Iteration 7, loss = 0.26611294
Iteration 8, loss = 0.25553499
Iteration 9, loss = 0.25971364
Iteration 10, loss = 0.24295097
Iteration 11, loss = 0.22468973
Iteration 12, loss = 0.21923823
Iteration 13, loss = 0.20820862
Iteration 14, loss = 0.21341809
Iteration 15, loss = 0.22023131
Iteration 16, loss = 0.20542956
Iteration 17, loss = 0.21025557
Iteration 18, loss = 0.19584750
Iteration 19, loss = 0.19289294
Iteration 20, loss = 0.20997012
Iteration 21, loss = 0.21050673
Iteration 22, loss = 0.21156434
Iteration 23, loss = 0.20246325
Iteration 24, loss = 0.20032294
Iteration 25, loss = 0.19566380
Iteration 26, loss = 0.18807751
Iteration 27, loss = 0.18479644
Iteration 28, loss = 0.18242214
Iteration 29, loss = 0.18044234
Iteration 30, loss = 0.18087253
Iteration 31, loss = 0.18143906
Iteration 32, loss = 0.17673394
Iteration 33, loss = 0.17559812
Iteration 34, loss = 0.17663954
Iteration 35, loss = 0.17564359
Iteration 36, loss = 0.17484760
Iteration 37, loss = 0.17518837
Iteration 38, loss = 0.17404960
Iteration 39, loss = 0.17287405
Iteration 40, loss = 0.18227296
Iteration 41, loss = 0.18599541
Iteration 42, loss = 0.18450001
Iteration 43, loss = 0.18378848
Iteration 44, loss = 0.18507624
Iteration 45, loss = 0.18301413
Iteration 46, loss = 0.18223272
Iteration 47, loss = 0.18148551
Iteration 48, loss = 0.18056333
Iteration 49, loss = 0.17994286
Iteration 50, loss = 0.17925425
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.74304294
Iteration 2, loss = 0.72689829
Iteration 3, loss = 0.71395446
Iteration 4, loss = 0.70373628
Iteration 5, loss = 0.69500821
Iteration 6, loss = 0.68801828
Iteration 7, loss = 0.68235060
Iteration 8, loss = 0.67782873
Iteration 9, loss = 0.67420977
Iteration 10, loss = 0.67134336
Iteration 11, loss = 0.66899943
Iteration 12, loss = 0.66720120
Iteration 13, loss = 0.66593415
Iteration 14, loss = 0.66425874
Iteration 15, loss = 0.66327668
Iteration 16, loss = 0.66245694
Iteration 17, loss = 0.66177784
Iteration 18, loss = 0.66119964
Iteration 19, loss = 0.66073337
Iteration 20, loss = 0.66033376
Iteration 21, loss = 0.66001120
Iteration 22, loss = 0.65974555
Iteration 23, loss = 0.65951192
Iteration 24, loss = 0.65932124
Iteration 25, loss = 0.65916609
Iteration 26, loss = 0.65903519
Iteration 27, loss = 0.65893664
Iteration 28, loss = 0.65884809
Iteration 29, loss = 0.65878707
Iteration 30, loss = 0.65872773
Iteration 31, loss = 0.65868282
Iteration 32, loss = 0.66061173
Iteration 33, loss = 0.66207406
Iteration 34, loss = 0.66208752
Iteration 35, loss = 0.66208471
Iteration 36, loss = 0.66206938
Iteration 37, loss = 0.66206365
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.87957611
Iteration 2, loss = 0.83620179
Iteration 3, loss = 0.81473463
Iteration 4, loss = 0.78679737
Iteration 5, loss = 0.78300011
Iteration 6, loss = 0.77719812
Iteration 7, loss = 0.75953502
Iteration 8, loss = 0.74632503
Iteration 9, loss = 0.73648716
Iteration 10, loss = 0.70324922
Iteration 11, loss = 0.66870674
Iteration 12, loss = 0.65683818
Iteration 13, loss = 0.64022061
Iteration 14, loss = 0.62315178
Iteration 15, loss = 0.64514330
Iteration 16, loss = 0.64386294
Iteration 17, loss = 0.63788460
Iteration 18, loss = 0.63074247
Iteration 19, loss = 0.62381571
Iteration 20, loss = 0.61746870
Iteration 21, loss = 0.60483342
Iteration 22, loss = 0.58424695
Iteration 23, loss = 0.57268956
Iteration 24, loss = 0.56343055
Iteration 25, loss = 0.55820177
Iteration 26, loss = 0.53212062
Iteration 27, loss = 0.51937871
Iteration 28, loss = 0.48988246
Iteration 29, loss = 0.48118722
Iteration 30, loss = 0.49632897
Iteration 31, loss = 0.48815170
Iteration 32, loss = 0.48052378
Iteration 33, loss = 0.47379358
Iteration 34, loss = 0.46704013
Iteration 35, loss = 0.46082800
Iteration 36, loss = 0.45475122
Iteration 37, loss = 0.44916216
Iteration 38, loss = 0.44281789
Iteration 39, loss = 0.43049299
Iteration 40, loss = 0.40672201
Iteration 41, loss = 0.41003445
Iteration 42, loss = 0.42823512
Iteration 43, loss = 0.42620736
Iteration 44, loss = 0.42145829
Iteration 45, loss = 0.41732586
Iteration 46, loss = 0.41521525
Iteration 47, loss = 0.43049604
Iteration 48, loss = 0.41775680
Iteration 49, loss = 0.40456706
Iteration 50, loss = 0.40091728
Iteration 51, loss = 0.39748025
Iteration 52, loss = 0.39577067
Iteration 53, loss = 0.39126603
Iteration 54, loss = 0.38378325
Iteration 55, loss = 0.38156765
Iteration 56, loss = 0.37871212
Iteration 57, loss = 0.37231579
Iteration 58, loss = 0.36843575
Iteration 59, loss = 0.36255235
Iteration 60, loss = 0.36078462
Iteration 61, loss = 0.35880975
Iteration 62, loss = 0.36325025
Iteration 63, loss = 0.36184527
Iteration 64, loss = 0.35967687
Iteration 65, loss = 0.35762610
Iteration 66, loss = 0.35563684
Iteration 67, loss = 0.35364667
Iteration 68, loss = 0.35179676
Iteration 69, loss = 0.34999897
Iteration 70, loss = 0.34827653
Iteration 71, loss = 0.34659943
Iteration 72, loss = 0.34499071
Iteration 73, loss = 0.34342720
Iteration 74, loss = 0.34191538
Iteration 75, loss = 0.34047135
Iteration 76, loss = 0.33905855
Iteration 77, loss = 0.33769603
Iteration 78, loss = 0.33638322
Iteration 79, loss = 0.33510938
Iteration 80, loss = 0.33388920
Iteration 81, loss = 0.33269771
Iteration 82, loss = 0.33156303
Iteration 83, loss = 0.33046202
Iteration 84, loss = 0.32938821
Iteration 85, loss = 0.32834886
Iteration 86, loss = 0.32734410
Iteration 87, loss = 0.32638021
Iteration 88, loss = 0.32544544
Iteration 89, loss = 0.32455037
Iteration 90, loss = 0.32368671
Iteration 91, loss = 0.32285283
Iteration 92, loss = 0.32203848
Iteration 93, loss = 0.32124859
Iteration 94, loss = 0.32026425
Iteration 95, loss = 0.31886292
Iteration 96, loss = 0.31544253
Iteration 97, loss = 0.28330732
Iteration 98, loss = 0.30329666
Iteration 99, loss = 0.61086348
Iteration 100, loss = 0.56173045
Iteration 101, loss = 0.53461786
Iteration 102, loss = 0.51791750
Iteration 103, loss = 0.50800765
Iteration 104, loss = 0.50247343
Iteration 105, loss = 0.49875712
Iteration 106, loss = 0.49671181
Iteration 107, loss = 0.49536870
Iteration 108, loss = 0.49479342
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.73641303
Iteration 2, loss = 0.71841879
Iteration 3, loss = 0.69857072
Iteration 4, loss = 0.68749328
Iteration 5, loss = 0.67793448
Iteration 6, loss = 0.66979687
Iteration 7, loss = 0.66171504
Iteration 8, loss = 0.65697604
Iteration 9, loss = 0.65322040
Iteration 10, loss = 0.65002091
Iteration 11, loss = 0.64604223
Iteration 12, loss = 0.64029371
Iteration 13, loss = 0.63739597
Iteration 14, loss = 0.63420955
Iteration 15, loss = 0.63183734
Iteration 16, loss = 0.62970076
Iteration 17, loss = 0.62771318
Iteration 18, loss = 0.62681483
Iteration 19, loss = 0.62554171
Iteration 20, loss = 0.62398934
Iteration 21, loss = 0.62362084
Iteration 22, loss = 0.62083647
Iteration 23, loss = 0.61956272
Iteration 24, loss = 0.61840100
Iteration 25, loss = 0.61729879
Iteration 26, loss = 0.61760566
Iteration 27, loss = 0.61828618
Iteration 28, loss = 0.61744240
Iteration 29, loss = 0.61659095
Iteration 30, loss = 0.61583410
Iteration 31, loss = 0.61509351
Iteration 32, loss = 0.61439928
Iteration 33, loss = 0.61377274
Iteration 34, loss = 0.61208339
Iteration 35, loss = 0.60911520
Iteration 36, loss = 0.60724539
Iteration 37, loss = 0.60634072
Iteration 38, loss = 0.60374614
Iteration 39, loss = 0.60321444
Iteration 40, loss = 0.60198261
Iteration 41, loss = 0.60082904
Iteration 42, loss = 0.60021341
Iteration 43, loss = 0.59984015
Iteration 44, loss = 0.71663891
Iteration 45, loss = 0.79888237
Iteration 46, loss = 0.77661439
Iteration 47, loss = 0.76522262
Iteration 48, loss = 0.75738991
Iteration 49, loss = 0.75101870
Iteration 50, loss = 0.74544545
Iteration 51, loss = 0.73497128
Iteration 52, loss = 0.72503600
Iteration 53, loss = 0.72184438
Iteration 54, loss = 0.71910320
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.71717949
Iteration 2, loss = 0.71084835
Iteration 3, loss = 0.70718372
Iteration 4, loss = 0.70330035
Iteration 5, loss = 0.70081459
Iteration 6, loss = 0.69859674
Iteration 7, loss = 0.69662642
Iteration 8, loss = 0.69482147
Iteration 9, loss = 0.69323762
Iteration 10, loss = 0.69178658
Iteration 11, loss = 0.69050409
Iteration 12, loss = 0.68935520
Iteration 13, loss = 0.68834852
Iteration 14, loss = 0.68742870
Iteration 15, loss = 0.68663051
Iteration 16, loss = 0.68586371
Iteration 17, loss = 0.68517102
Iteration 18, loss = 0.68457404
Iteration 19, loss = 0.68399962
Iteration 20, loss = 0.68336040
Iteration 21, loss = 0.68137096
Iteration 22, loss = 0.67484616
Iteration 23, loss = 0.67615813
Iteration 24, loss = 0.68258685
Iteration 25, loss = 0.68237661
Iteration 26, loss = 0.68219923
Iteration 27, loss = 0.68204894
Iteration 28, loss = 0.68193165
Iteration 29, loss = 0.68182961
Iteration 30, loss = 0.68172916
Iteration 31, loss = 0.68165332
Iteration 32, loss = 0.68159442
Iteration 33, loss = 0.68152933
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.75042687
Iteration 2, loss = 0.73522192
Iteration 3, loss = 0.72197990
Iteration 4, loss = 0.70938521
Iteration 5, loss = 0.69926090
Iteration 6, loss = 0.68990306
Iteration 7, loss = 0.65957228
Iteration 8, loss = 0.63768446
Iteration 9, loss = 0.62694322
Iteration 10, loss = 0.61753677
Iteration 11, loss = 0.60679234
Iteration 12, loss = 0.60121000
Iteration 13, loss = 0.59753296
Iteration 14, loss = 0.59241991
Iteration 15, loss = 0.58685066
Iteration 16, loss = 0.58190328
Iteration 17, loss = 0.57700526
Iteration 18, loss = 0.57262636
Iteration 19, loss = 0.56916360
Iteration 20, loss = 0.56609569
Iteration 21, loss = 0.56601908
Iteration 22, loss = 0.60284568
Iteration 23, loss = 0.59144890
Iteration 24, loss = 0.58142084
Iteration 25, loss = 0.56979357
Iteration 26, loss = 0.54904734
Iteration 27, loss = 0.54608390
Iteration 28, loss = 0.54356889
Iteration 29, loss = 0.54134586
Iteration 30, loss = 0.53932849
Iteration 31, loss = 0.53751101
Iteration 32, loss = 0.53583268
Iteration 33, loss = 0.53428725
Iteration 34, loss = 0.53283419
Iteration 35, loss = 0.53147422
Iteration 36, loss = 0.53062842
Iteration 37, loss = 0.52933282
Iteration 38, loss = 0.52815751
Iteration 39, loss = 0.52703785
Iteration 40, loss = 0.52598106
Iteration 41, loss = 0.52495136
Iteration 42, loss = 0.52398381
Iteration 43, loss = 0.52304980
Iteration 44, loss = 0.52215157
Iteration 45, loss = 0.52129572
Iteration 46, loss = 0.52040872
Iteration 47, loss = 0.51961303
Iteration 48, loss = 0.51885964
Iteration 49, loss = 0.51815839
Iteration 50, loss = 0.51746054
Iteration 51, loss = 0.51680537
Iteration 52, loss = 0.51617305
Iteration 53, loss = 0.51557411
Iteration 54, loss = 0.51501260
Iteration 55, loss = 0.51446351
Iteration 56, loss = 0.51393763
Iteration 57, loss = 0.51342514
Iteration 58, loss = 0.51295031
Iteration 59, loss = 0.51248958
Iteration 60, loss = 0.50917924
Iteration 61, loss = 0.57808159
Iteration 62, loss = 0.57402754
Iteration 63, loss = 0.55337038
Iteration 64, loss = 0.55090398
Iteration 65, loss = 0.54301174
Iteration 66, loss = 0.52384839
Iteration 67, loss = 0.52813889
Iteration 68, loss = 0.52423133
Iteration 69, loss = 0.52227035
Iteration 70, loss = 0.52122791
Iteration 71, loss = 0.52063636
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Iteration 1, loss = 0.27857616
Iteration 2, loss = 0.17093527
Iteration 3, loss = 0.14784237
Iteration 4, loss = 0.16050515
Iteration 5, loss = 0.16518912
Iteration 6, loss = 0.15128325
Iteration 7, loss = 0.16126614
Iteration 8, loss = 0.16112385
Iteration 9, loss = 0.17054643
Iteration 10, loss = 0.17235235
Iteration 11, loss = 0.16996479
Iteration 12, loss = 0.16857094
Iteration 13, loss = 0.18030607
Iteration 14, loss = 0.17311690
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Out[38]:
RandomizedSearchCV(cv=5, estimator=MLPClassifier(), n_iter=50,
                   param_distributions={'activation': ['relu', 'tanh',
                                                       'logistic'],
                                        'alpha': [0.0001, 0.001, 0.01, 0.1],
                                        'hidden_layer_sizes': [(2,), (10,),
                                                               (50,), (100,),
                                                               (50, 50),
                                                               (100, 100)],
                                        'max_iter': [1000],
                                        'momentum': [0.1, 0.2, 0.3, 0.4, 0.5,
                                                     0.6, 0.7, 0.8, 0.9],
                                        'solver': ['adam'], 'verbose': [1]},
                   random_state=42)
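The repr above fully specifies the search space, so the search object (assigned to `model` in the notebook) could be built as follows. This is a minimal sketch: `X_train`/`Y_train` come from the train/test split earlier in the notebook and are assumed here.

```python
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import RandomizedSearchCV

# Search space taken directly from the RandomizedSearchCV repr above.
param_distributions = {
    'activation': ['relu', 'tanh', 'logistic'],
    'alpha': [0.0001, 0.001, 0.01, 0.1],
    'hidden_layer_sizes': [(2,), (10,), (50,), (100,), (50, 50), (100, 100)],
    'max_iter': [1000],
    'momentum': [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9],
    'solver': ['adam'],
    'verbose': [1],
}

# 50 sampled configurations, each evaluated with 5-fold cross-validation.
search = RandomizedSearchCV(MLPClassifier(), param_distributions,
                            n_iter=50, cv=5, random_state=42)
# search.fit(X_train, Y_train)  # produces the per-iteration loss logs above
```

With `n_iter=50` and `cv=5`, the search fits 250 networks in total, which is why the verbose log above is so long and why training takes roughly 13 minutes.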
In [39]:
t2=time.time() 
print("Training Time:",t2-t1)
Training Time: 774.8519990444183
In [40]:
#import pickle
#with open('grid_search.pkl', 'wb') as file:
    #pickle.dump(model, file)
In [41]:
permutations=pd.DataFrame(model.cv_results_)
permutations
Out[41]:
mean_fit_time std_fit_time mean_score_time std_score_time param_verbose param_solver param_momentum param_max_iter param_hidden_layer_sizes param_alpha param_activation params split0_test_score split1_test_score split2_test_score split3_test_score split4_test_score mean_test_score std_test_score rank_test_score
0 2.725081 0.842791 0.010158 0.003979 1 adam 0.6 1000 (50, 50) 0.1 logistic {'verbose': 1, 'solver': 'adam', 'momentum': 0... 0.939117 0.913242 0.916667 0.908676 0.904490 0.916438 0.012063 23
1 1.067262 0.386352 0.003125 0.006249 1 adam 0.5 1000 (2,) 0.0001 tanh {'verbose': 1, 'solver': 'adam', 'momentum': 0... 0.735921 0.621766 0.644597 0.728691 0.748858 0.695967 0.052172 43
2 4.108228 0.812195 0.025349 0.004190 1 adam 0.9 1000 (100, 100) 0.01 logistic {'verbose': 1, 'solver': 'adam', 'momentum': 0... 0.921994 0.936073 0.924277 0.938737 0.929604 0.930137 0.006483 9
3 2.803752 0.420413 0.021880 0.007655 1 adam 0.7 1000 (100, 100) 0.1 tanh {'verbose': 1, 'solver': 'adam', 'momentum': 0... 0.960426 0.929224 0.949772 0.953196 0.940639 0.946651 0.010791 2
4 1.443288 0.584920 0.006417 0.005852 1 adam 0.1 1000 (50,) 0.001 relu {'verbose': 1, 'solver': 'adam', 'momentum': 0... 0.939498 0.866058 0.925799 0.901446 0.926941 0.911948 0.026045 27
5 1.684251 0.783841 0.001600 0.001959 1 adam 0.8 1000 (10,) 0.0001 logistic {'verbose': 1, 'solver': 'adam', 'momentum': 0... 0.878995 0.855784 0.658295 0.885464 0.928463 0.841400 0.094519 38
6 1.181934 0.302832 0.002016 0.001683 1 adam 0.2 1000 (50,) 0.1 relu {'verbose': 1, 'solver': 'adam', 'momentum': 0... 0.847412 0.848935 0.914764 0.891553 0.927702 0.886073 0.033046 35
7 1.068616 0.371016 0.006002 0.005190 1 adam 0.6 1000 (50,) 0.01 relu {'verbose': 1, 'solver': 'adam', 'momentum': 0... 0.933790 0.923135 0.934551 0.941781 0.898782 0.926408 0.015040 14
8 1.656553 1.022037 0.001974 0.003948 1 adam 0.7 1000 (10,) 0.0001 tanh {'verbose': 1, 'solver': 'adam', 'momentum': 0... 0.918569 0.745814 0.910578 0.882420 0.802892 0.852055 0.067045 37
9 1.091396 0.838221 0.003125 0.006250 1 adam 0.8 1000 (2,) 0.001 tanh {'verbose': 1, 'solver': 'adam', 'momentum': 0... 0.920091 0.378234 0.661720 0.624429 0.748478 0.666591 0.176589 46
10 0.912515 0.045592 0.003126 0.006251 1 adam 0.3 1000 (10,) 0.1 logistic {'verbose': 1, 'solver': 'adam', 'momentum': 0... 0.860731 0.833333 0.878234 0.885464 0.903729 0.872298 0.023869 36
11 2.010581 1.280232 0.005318 0.001361 1 adam 0.3 1000 (10,) 0.001 tanh {'verbose': 1, 'solver': 'adam', 'momentum': 0... 0.928082 0.923135 0.915525 0.924658 0.939498 0.926180 0.007824 15
12 2.233811 1.033752 0.008361 0.005154 1 adam 0.7 1000 (100,) 0.0001 tanh {'verbose': 1, 'solver': 'adam', 'momentum': 0... 0.935693 0.924658 0.934551 0.918950 0.936454 0.930061 0.007006 10
13 1.290798 0.473899 0.007852 0.006517 1 adam 0.1 1000 (100,) 0.001 relu {'verbose': 1, 'solver': 'adam', 'momentum': 0... 0.938737 0.943303 0.937976 0.893455 0.932648 0.929224 0.018201 11
14 1.942427 0.696230 0.008251 0.007044 1 adam 0.5 1000 (100,) 0.0001 logistic {'verbose': 1, 'solver': 'adam', 'momentum': 0... 0.923135 0.927702 0.915145 0.899924 0.912100 0.915601 0.009607 24
15 3.043670 1.184344 0.009297 0.006059 1 adam 0.5 1000 (100,) 0.0001 tanh {'verbose': 1, 'solver': 'adam', 'momentum': 0... 0.942922 0.931126 0.939498 0.939498 0.942542 0.939117 0.004251 4
16 2.765333 0.499217 0.006463 0.007492 1 adam 0.3 1000 (50, 50) 0.01 tanh {'verbose': 1, 'solver': 'adam', 'momentum': 0... 0.955860 0.936834 0.946728 0.949772 0.926941 0.943227 0.010206 3
17 1.107301 1.173637 0.002601 0.000803 1 adam 0.5 1000 (2,) 0.1 logistic {'verbose': 1, 'solver': 'adam', 'momentum': 0... 0.613394 0.705479 0.587900 0.619482 0.573059 0.619863 0.046010 50
18 1.065964 0.470854 0.007449 0.006764 1 adam 0.7 1000 (10,) 0.001 relu {'verbose': 1, 'solver': 'adam', 'momentum': 0... 0.917808 0.917047 0.921233 0.876332 0.843607 0.895205 0.030607 34
19 0.652047 0.423990 0.002420 0.002568 1 adam 0.2 1000 (10,) 0.0001 relu {'verbose': 1, 'solver': 'adam', 'momentum': 0... 0.919711 0.915906 0.887367 0.877093 0.932648 0.906545 0.020869 29
20 1.902487 0.493640 0.006297 0.007245 1 adam 0.8 1000 (10,) 0.01 tanh {'verbose': 1, 'solver': 'adam', 'momentum': 0... 0.847032 0.902207 0.867580 0.595510 0.910578 0.824581 0.116831 40
21 0.906319 0.498235 0.006250 0.007655 1 adam 0.1 1000 (10,) 0.001 relu {'verbose': 1, 'solver': 'adam', 'momentum': 0... 0.931507 0.921613 0.936834 0.845890 0.852359 0.897641 0.039966 33
22 1.552014 0.881292 0.003923 0.006052 1 adam 0.9 1000 (10,) 0.01 tanh {'verbose': 1, 'solver': 'adam', 'momentum': 0... 0.939878 0.881279 0.887747 0.922374 0.879376 0.902131 0.024471 31
23 0.997859 0.120950 0.004327 0.005810 1 adam 0.1 1000 (2,) 0.001 relu {'verbose': 1, 'solver': 'adam', 'momentum': 0... 0.500761 0.813927 0.500761 0.500000 0.796423 0.622374 0.149359 49
24 4.937795 0.702680 0.032503 0.006657 1 adam 0.2 1000 (100, 100) 0.001 logistic {'verbose': 1, 'solver': 'adam', 'momentum': 0... 0.925799 0.941020 0.904871 0.884703 0.920852 0.915449 0.019222 25
25 1.716515 0.813917 0.006511 0.006127 1 adam 0.5 1000 (50,) 0.01 tanh {'verbose': 1, 'solver': 'adam', 'momentum': 0... 0.938356 0.926560 0.915525 0.936454 0.923516 0.928082 0.008444 12
26 1.018627 0.388293 0.003125 0.006250 1 adam 0.7 1000 (10,) 0.1 relu {'verbose': 1, 'solver': 'adam', 'momentum': 0... 0.915906 0.923516 0.924658 0.929224 0.909817 0.920624 0.006894 18
27 2.612265 0.732541 0.017994 0.004555 1 adam 0.2 1000 (100,) 0.001 tanh {'verbose': 1, 'solver': 'adam', 'momentum': 0... 0.942922 0.928843 0.936454 0.938356 0.937595 0.936834 0.004560 5
28 2.858571 1.039561 0.010996 0.006223 1 adam 0.4 1000 (50,) 0.001 logistic {'verbose': 1, 'solver': 'adam', 'momentum': 0... 0.931887 0.928463 0.924658 0.923135 0.911720 0.923973 0.006843 17
29 1.513967 0.217781 0.006251 0.007656 1 adam 0.5 1000 (100,) 0.0001 relu {'verbose': 1, 'solver': 'adam', 'momentum': 0... 0.916286 0.876332 0.891172 0.936834 0.908295 0.905784 0.020793 30
30 1.785247 0.350186 0.008143 0.005113 1 adam 0.2 1000 (100,) 0.1 logistic {'verbose': 1, 'solver': 'adam', 'momentum': 0... 0.920472 0.904490 0.912861 0.921233 0.928082 0.917428 0.008068 21
31 1.135350 0.696967 0.001803 0.001472 1 adam 0.2 1000 (2,) 0.001 relu {'verbose': 1, 'solver': 'adam', 'momentum': 0... 0.738965 0.666667 0.839422 0.818874 0.893836 0.791553 0.079827 41
32 1.032605 0.405488 0.002820 0.002484 1 adam 0.4 1000 (10,) 0.1 relu {'verbose': 1, 'solver': 'adam', 'momentum': 0... 0.906773 0.887747 0.847032 0.923896 0.923896 0.897869 0.028709 32
33 4.146009 0.568740 0.024144 0.003244 1 adam 0.2 1000 (100, 100) 0.0001 logistic {'verbose': 1, 'solver': 'adam', 'momentum': 0... 0.933790 0.912100 0.936454 0.925419 0.915906 0.924734 0.009564 16
34 1.520054 0.814789 0.001215 0.001488 1 adam 0.9 1000 (2,) 0.01 logistic {'verbose': 1, 'solver': 'adam', 'momentum': 0... 0.625571 0.765982 0.738204 0.722603 0.748858 0.720244 0.049400 42
35 2.365715 0.568549 0.006260 0.007667 1 adam 0.3 1000 (100, 100) 0.1 relu {'verbose': 1, 'solver': 'adam', 'momentum': 0... 0.911339 0.940259 0.919330 0.868721 0.926180 0.913166 0.024166 26
36 1.165745 0.604283 0.001807 0.002234 1 adam 0.3 1000 (2,) 0.01 relu {'verbose': 1, 'solver': 'adam', 'momentum': 0... 0.736301 0.758371 0.749239 0.585236 0.543760 0.674581 0.091105 45
37 5.020190 1.479456 0.032723 0.007857 1 adam 0.7 1000 (100, 100) 0.01 tanh {'verbose': 1, 'solver': 'adam', 'momentum': 0... 0.945586 0.944825 0.962329 0.946347 0.946728 0.949163 0.006615 1
38 5.293991 1.319659 0.006253 0.007659 1 adam 0.4 1000 (100, 100) 0.1 relu {'verbose': 1, 'solver': 'adam', 'momentum': 0... 0.935312 0.915145 0.939878 0.917808 0.931126 0.927854 0.009730 13
39 3.986190 1.238446 0.007486 0.006749 1 adam 0.6 1000 (100, 100) 0.01 relu {'verbose': 1, 'solver': 'adam', 'momentum': 0... 0.931126 0.899924 0.914003 0.930365 0.914384 0.917960 0.011670 20
40 1.872689 0.766171 0.003126 0.006251 1 adam 0.4 1000 (2,) 0.1 relu {'verbose': 1, 'solver': 'adam', 'momentum': 0... 0.692542 0.679604 0.716134 0.858067 0.499239 0.689117 0.114394 44
41 2.915190 0.424272 0.019041 0.006174 1 adam 0.2 1000 (50, 50) 0.01 tanh {'verbose': 1, 'solver': 'adam', 'momentum': 0... 0.930365 0.932648 0.941781 0.931507 0.934551 0.934170 0.004049 6
42 3.034181 0.580042 0.015500 0.000252 1 adam 0.1 1000 (50, 50) 0.001 logistic {'verbose': 1, 'solver': 'adam', 'momentum': 0... 0.909817 0.900304 0.897260 0.931126 0.898402 0.907382 0.012672 28
43 1.997938 0.331038 0.001601 0.001961 1 adam 0.5 1000 (50,) 0.001 relu {'verbose': 1, 'solver': 'adam', 'momentum': 0... 0.942161 0.921233 0.910959 0.945967 0.931887 0.930441 0.012999 8
44 4.034305 1.436110 0.033360 0.006592 1 adam 0.8 1000 (100, 100) 0.01 logistic {'verbose': 1, 'solver': 'adam', 'momentum': 0... 0.921233 0.925038 0.918569 0.918569 0.912481 0.919178 0.004103 19
45 1.141009 0.277518 0.003125 0.006251 1 adam 0.3 1000 (2,) 0.0001 relu {'verbose': 1, 'solver': 'adam', 'momentum': 0... 0.932268 0.524353 0.503805 0.500000 0.652588 0.622603 0.164675 48
46 2.510586 0.863699 0.010978 0.006233 1 adam 0.5 1000 (50,) 0.1 logistic {'verbose': 1, 'solver': 'adam', 'momentum': 0... 0.934932 0.911720 0.910959 0.914384 0.910198 0.916438 0.009354 22
47 4.017716 1.703192 0.004125 0.006067 1 adam 0.2 1000 (2,) 0.001 tanh {'verbose': 1, 'solver': 'adam', 'momentum': 0... 0.913242 0.745814 0.935693 0.904490 0.630137 0.825875 0.118824 39
48 2.420970 0.637515 0.006946 0.005012 1 adam 0.5 1000 (50,) 0.001 tanh {'verbose': 1, 'solver': 'adam', 'momentum': 0... 0.923135 0.956240 0.945205 0.929224 0.909056 0.932572 0.016571 7
49 1.347153 0.606277 0.005126 0.005418 1 adam 0.6 1000 (2,) 0.0001 logistic {'verbose': 1, 'solver': 'adam', 'momentum': 0... 0.616058 0.685312 0.493912 0.563546 0.764460 0.624658 0.093943 47
In [42]:
permutations[['param_hidden_layer_sizes','param_activation','param_alpha','mean_test_score']]
Out[42]:
param_hidden_layer_sizes param_activation param_alpha mean_test_score
0 (50, 50) logistic 0.1 0.916438
1 (2,) tanh 0.0001 0.695967
2 (100, 100) logistic 0.01 0.930137
3 (100, 100) tanh 0.1 0.946651
4 (50,) relu 0.001 0.911948
5 (10,) logistic 0.0001 0.841400
6 (50,) relu 0.1 0.886073
7 (50,) relu 0.01 0.926408
8 (10,) tanh 0.0001 0.852055
9 (2,) tanh 0.001 0.666591
10 (10,) logistic 0.1 0.872298
11 (10,) tanh 0.001 0.926180
12 (100,) tanh 0.0001 0.930061
13 (100,) relu 0.001 0.929224
14 (100,) logistic 0.0001 0.915601
15 (100,) tanh 0.0001 0.939117
16 (50, 50) tanh 0.01 0.943227
17 (2,) logistic 0.1 0.619863
18 (10,) relu 0.001 0.895205
19 (10,) relu 0.0001 0.906545
20 (10,) tanh 0.01 0.824581
21 (10,) relu 0.001 0.897641
22 (10,) tanh 0.01 0.902131
23 (2,) relu 0.001 0.622374
24 (100, 100) logistic 0.001 0.915449
25 (50,) tanh 0.01 0.928082
26 (10,) relu 0.1 0.920624
27 (100,) tanh 0.001 0.936834
28 (50,) logistic 0.001 0.923973
29 (100,) relu 0.0001 0.905784
30 (100,) logistic 0.1 0.917428
31 (2,) relu 0.001 0.791553
32 (10,) relu 0.1 0.897869
33 (100, 100) logistic 0.0001 0.924734
34 (2,) logistic 0.01 0.720244
35 (100, 100) relu 0.1 0.913166
36 (2,) relu 0.01 0.674581
37 (100, 100) tanh 0.01 0.949163
38 (100, 100) relu 0.1 0.927854
39 (100, 100) relu 0.01 0.917960
40 (2,) relu 0.1 0.689117
41 (50, 50) tanh 0.01 0.934170
42 (50, 50) logistic 0.001 0.907382
43 (50,) relu 0.001 0.930441
44 (100, 100) logistic 0.01 0.919178
45 (2,) relu 0.0001 0.622603
46 (50,) logistic 0.1 0.916438
47 (2,) tanh 0.001 0.825875
48 (50,) tanh 0.001 0.932572
49 (2,) logistic 0.0001 0.624658
In [43]:
model.best_score_
Out[43]:
0.9491628614916285
In [44]:
model.best_params_
Out[44]:
{'verbose': 1,
 'solver': 'adam',
 'momentum': 0.7,
 'max_iter': 1000,
 'hidden_layer_sizes': (100, 100),
 'alpha': 0.01,
 'activation': 'tanh'}
In [49]:
model = MLPClassifier(
    hidden_layer_sizes=(50, 50),
    activation='tanh',
    solver='adam',
    alpha=0.01,
    batch_size=1024,
    momentum=0.2,
    max_iter=1000,
    verbose=1,
)
In [50]:
model.fit(X_train, Y_train)
Iteration 1, loss = 0.54010544
Iteration 2, loss = 0.38810289
Iteration 3, loss = 0.30885027
Iteration 4, loss = 0.26655233
Iteration 5, loss = 0.23199738
Iteration 6, loss = 0.20563274
[iterations 7-57 omitted; training loss continues to fall toward ~0.115]
Iteration 58, loss = 0.11483469
Iteration 59, loss = 0.11425903
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
Out[50]:
MLPClassifier(activation='tanh', alpha=0.01, batch_size=1024,
              hidden_layer_sizes=(50, 50), max_iter=1000, momentum=0.2,
              verbose=1)

Training Accuracy¶

In [53]:
from sklearn.metrics import accuracy_score
x_train_pred = model.predict(X_train)
train_data_accu = accuracy_score(Y_train, x_train_pred)
In [54]:
print(train_data_accu)
0.9627092846270928

Testing Accuracy¶

In [55]:
x_test_pred=model.predict(X_test)
print(x_test_pred)
[1 0 0 ... 1 0 0]
In [56]:
testing_data_accu = accuracy_score(Y_test, x_test_pred)
testing_data_accu
Out[56]:
0.9625684723067559

Plotting the loss curve¶

In [68]:
import matplotlib.pyplot as plt

loss_values = model.loss_curve_
epochs = range(1, len(loss_values) + 1)
plt.plot(epochs, loss_values, label='Training Loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()

Printing the network layer shapes¶

In [72]:
for i in range(len(model.coefs_)):
    print("Layer ", i, ":", model.coefs_[i].shape)
Layer  0 : (6, 50)
Layer  1 : (50, 50)
Layer  2 : (50, 1)
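The coefficient shapes above pin down the size of the trained network: 6 input features feed two hidden layers of 50 units and a single output unit. A quick count of the trainable parameters (each weight matrix plus one bias per output unit):

```python
# Layer shapes as printed above: (n_in, n_out) per weight matrix.
shapes = [(6, 50), (50, 50), (50, 1)]
weights = sum(n_in * n_out for n_in, n_out in shapes)  # 300 + 2500 + 50
biases = sum(n_out for _, n_out in shapes)             # 50 + 50 + 1
total_params = weights + biases
print(total_params)
```

So this MLP has 2,951 trainable parameters, small enough to train in minutes on the balanced sample of ~16k rows.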

Classification report¶

In [57]:
from sklearn.metrics import accuracy_score, confusion_matrix,classification_report
print('\nConfusion Matrix is:\n',confusion_matrix(Y_test, x_test_pred)) 
print('\nClassification Report:\n',classification_report(Y_test, x_test_pred))
print('\nAccuracy:\n',accuracy_score(Y_test, x_test_pred))
Confusion Matrix is:
 [[1564   79]
 [  44 1599]]

Classification Report:
               precision    recall  f1-score   support

           0       0.97      0.95      0.96      1643
           1       0.95      0.97      0.96      1643

    accuracy                           0.96      3286
   macro avg       0.96      0.96      0.96      3286
weighted avg       0.96      0.96      0.96      3286


Accuracy:
 0.9625684723067559
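The report's numbers can be sanity-checked directly from the confusion matrix printed above (rows are actual, columns are predicted; class 1 = fraud):

```python
import numpy as np

# Confusion matrix from the output above.
cm = np.array([[1564,   79],
               [  44, 1599]])
tn, fp, fn, tp = cm.ravel()

precision = tp / (tp + fp)       # 1599 / 1678
recall    = tp / (tp + fn)       # 1599 / 1643
accuracy  = (tp + tn) / cm.sum()

print(round(precision, 2), round(recall, 2))
```

These reproduce the class-1 precision (0.95), recall (0.97), and overall accuracy (~0.9626) reported above.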
In [242]:
#pip install yellowbrick
In [58]:
from yellowbrick.classifier import ClassificationReport

visualizer = ClassificationReport(model, classes=['0', '1'])

visualizer.fit(X_train, Y_train)
visualizer.score(X_test, Y_test)
visualizer.show()
C:\Users\msi\anaconda3\lib\site-packages\sklearn\base.py:450: UserWarning: X does not have valid feature names, but MLPClassifier was fitted with feature names
  warnings.warn(
Out[58]:
<AxesSubplot:title={'center':'MLPClassifier Classification Report'}>

Confusion matrix¶

In [59]:
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix
import seaborn as sns

cm = confusion_matrix(Y_test, x_test_pred)

plt.figure(figsize=(8, 6))
sns.heatmap(cm, annot=True, cmap='RdYlGn', fmt='g', cbar=False)
plt.title('Confusion matrix')
plt.xlabel('Predicted')
plt.ylabel('Actual')
plt.show()

Let's detect whether a payment is fraudulent¶

In [60]:
a = pd.DataFrame(
    {
        'type': [4],
        'amount': [223730.40],
        'oldbalanceOrg': [223730.40],
        'newbalanceOrig': [0.00],
        'oldbalanceDest': [0.00],
        'newbalanceDest': [0.00],
    }
)
In [61]:
y_pred=model.predict(a)
In [62]:
y_pred
Out[62]:
array([1], dtype=int64)

Saving the MLP model for future use¶

In [63]:
import joblib 
filename = "fraud_detection.joblib" 
joblib.dump(model, filename) 
Out[63]:
['fraud_detection.joblib']

Load the model¶

In [64]:
import joblib 
loaded_model = joblib.load(filename) 
a = pd.DataFrame(
    {
        'type': [1],
        'amount': [151685.06],
        'oldbalanceOrg': [0.00],
        'newbalanceOrig': [0.00],
        'oldbalanceDest': [568380.06],
        'newbalanceDest': [720065.12],
    }
)
In [65]:
y_pred =loaded_model.predict(a)
In [66]:
y_pred
Out[66]:
array([0], dtype=int64)

Our selected research paper at the National Conference 2023¶

Here is the link: https://www.jrps.in/uploads/2023/ncasit-2023/38.pdf¶